00:00:00.001 Started by upstream project "autotest-per-patch" build number 122918 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.028 The recommended git tool is: git 00:00:00.028 using credential 00000000-0000-0000-0000-000000000002 00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.043 Fetching changes from the remote Git repository 00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.064 Using shallow fetch with depth 1 00:00:00.064 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.064 > git --version # timeout=10 00:00:00.087 > git --version # 'git version 2.39.2' 00:00:00.087 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.088 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.088 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.073 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.084 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.095 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:04.095 > git config core.sparsecheckout # timeout=10 00:00:04.105 > git read-tree -mu HEAD # timeout=10 00:00:04.122 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:04.142 Commit message: "inventory/dev: add missing long names" 00:00:04.142 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:04.267 [Pipeline] Start of Pipeline 00:00:04.283 [Pipeline] library 00:00:04.285 Loading library shm_lib@master 00:00:04.285 Library shm_lib@master is cached. Copying from home. 00:00:04.305 [Pipeline] node 00:00:04.313 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.314 [Pipeline] { 00:00:04.323 [Pipeline] catchError 00:00:04.324 [Pipeline] { 00:00:04.335 [Pipeline] wrap 00:00:04.344 [Pipeline] { 00:00:04.353 [Pipeline] stage 00:00:04.355 [Pipeline] { (Prologue) 00:00:04.564 [Pipeline] sh 00:00:04.846 + logger -p user.info -t JENKINS-CI 00:00:04.862 [Pipeline] echo 00:00:04.864 Node: WFP22 00:00:04.870 [Pipeline] sh 00:00:05.166 [Pipeline] setCustomBuildProperty 00:00:05.184 [Pipeline] echo 00:00:05.186 Cleanup processes 00:00:05.191 [Pipeline] sh 00:00:05.472 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.472 3448360 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.486 [Pipeline] sh 00:00:05.772 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.772 ++ grep -v 'sudo pgrep' 00:00:05.772 ++ awk '{print $1}' 00:00:05.772 + sudo kill -9 00:00:05.772 + true 00:00:05.788 [Pipeline] cleanWs 00:00:05.799 [WS-CLEANUP] Deleting project workspace... 00:00:05.799 [WS-CLEANUP] Deferred wipeout is used... 
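The Prologue stage above clears out stale SPDK processes before the run: pgrep -af lists matching full command lines, grep -v drops the pgrep invocation itself, awk keeps only the PIDs, and kill -9 is allowed to fail when nothing is left over. A minimal stand-alone sketch of that idiom, assuming the same workspace path (the kill_stale_spdk wrapper name is hypothetical; the pipeline runs these commands inline):

    #!/usr/bin/env bash
    # Sketch of the stale-process cleanup from the Prologue stage above.
    kill_stale_spdk() {
        local dir=$1
        local pids
        # pgrep -af prints "PID full-command-line"; drop the pgrep line itself.
        pids=$(sudo pgrep -af "$dir/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
        # $pids is left unquoted so multiple PIDs expand to separate arguments;
        # it may also be empty (no stale processes), so never fail the build.
        sudo kill -9 $pids || true
    }
    kill_stale_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest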
00:00:05.805 [WS-CLEANUP] done 00:00:05.810 [Pipeline] setCustomBuildProperty 00:00:05.819 [Pipeline] sh 00:00:06.100 + sudo git config --global --replace-all safe.directory '*' 00:00:06.177 [Pipeline] nodesByLabel 00:00:06.178 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.187 [Pipeline] httpRequest 00:00:06.191 HttpMethod: GET 00:00:06.191 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:06.198 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:06.205 Response Code: HTTP/1.1 200 OK 00:00:06.205 Success: Status code 200 is in the accepted range: 200,404 00:00:06.206 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:07.553 [Pipeline] sh 00:00:07.836 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:07.856 [Pipeline] httpRequest 00:00:07.860 HttpMethod: GET 00:00:07.861 URL: http://10.211.164.101/packages/spdk_c3870302ff258b8c5f594a7c860b8d3e6c2d503d.tar.gz 00:00:07.861 Sending request to url: http://10.211.164.101/packages/spdk_c3870302ff258b8c5f594a7c860b8d3e6c2d503d.tar.gz 00:00:07.875 Response Code: HTTP/1.1 200 OK 00:00:07.876 Success: Status code 200 is in the accepted range: 200,404 00:00:07.876 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c3870302ff258b8c5f594a7c860b8d3e6c2d503d.tar.gz 00:00:36.117 [Pipeline] sh 00:00:36.406 + tar --no-same-owner -xf spdk_c3870302ff258b8c5f594a7c860b8d3e6c2d503d.tar.gz 00:00:38.961 [Pipeline] sh 00:00:39.256 + git -C spdk log --oneline -n5 00:00:39.256 c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:00:39.256 b65c4a87a scripts/pkgdep: Remove UADK from install_all_dependencies() 00:00:39.256 7a8d39909 Revert "test/common: Enable inherit_errexit" 00:00:39.256 4506c0c36 test/common: Enable inherit_errexit 00:00:39.256 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:00:39.297 [Pipeline] } 00:00:39.310 [Pipeline] // stage 00:00:39.319 [Pipeline] stage 00:00:39.321 [Pipeline] { (Prepare) 00:00:39.332 [Pipeline] writeFile 00:00:39.342 [Pipeline] sh 00:00:39.621 + logger -p user.info -t JENKINS-CI 00:00:39.634 [Pipeline] sh 00:00:39.919 + logger -p user.info -t JENKINS-CI 00:00:39.941 [Pipeline] sh 00:00:40.223 + cat autorun-spdk.conf 00:00:40.223 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.223 SPDK_TEST_NVMF=1 00:00:40.223 SPDK_TEST_NVME_CLI=1 00:00:40.223 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.223 SPDK_TEST_NVMF_NICS=e810 00:00:40.223 SPDK_TEST_VFIOUSER=1 00:00:40.223 SPDK_RUN_UBSAN=1 00:00:40.223 NET_TYPE=phy 00:00:40.231 RUN_NIGHTLY=0 00:00:40.235 [Pipeline] readFile 00:00:40.261 [Pipeline] withEnv 00:00:40.263 [Pipeline] { 00:00:40.277 [Pipeline] sh 00:00:40.556 + set -ex 00:00:40.556 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:40.556 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:40.556 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.556 ++ SPDK_TEST_NVMF=1 00:00:40.556 ++ SPDK_TEST_NVME_CLI=1 00:00:40.556 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.556 ++ SPDK_TEST_NVMF_NICS=e810 00:00:40.556 ++ SPDK_TEST_VFIOUSER=1 00:00:40.556 ++ SPDK_RUN_UBSAN=1 00:00:40.556 ++ NET_TYPE=phy 00:00:40.556 ++ RUN_NIGHTLY=0 00:00:40.557 + case $SPDK_TEST_NVMF_NICS in 00:00:40.557 + DRIVERS=ice 00:00:40.557 + [[ tcp == \r\d\m\a ]] 00:00:40.557 + [[ -n ice ]] 00:00:40.557 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:40.557 rmmod: 
ERROR: Module mlx4_ib is not currently loaded 00:00:40.557 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:40.557 rmmod: ERROR: Module irdma is not currently loaded 00:00:40.557 rmmod: ERROR: Module i40iw is not currently loaded 00:00:40.557 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:40.557 + true 00:00:40.557 + for D in $DRIVERS 00:00:40.557 + sudo modprobe ice 00:00:40.557 + exit 0 00:00:40.566 [Pipeline] } 00:00:40.584 [Pipeline] // withEnv 00:00:40.589 [Pipeline] } 00:00:40.605 [Pipeline] // stage 00:00:40.613 [Pipeline] catchError 00:00:40.615 [Pipeline] { 00:00:40.629 [Pipeline] timeout 00:00:40.630 Timeout set to expire in 40 min 00:00:40.632 [Pipeline] { 00:00:40.647 [Pipeline] stage 00:00:40.649 [Pipeline] { (Tests) 00:00:40.665 [Pipeline] sh 00:00:40.951 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.951 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.951 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.951 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:40.951 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.951 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:40.951 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:40.951 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:40.951 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:40.951 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:40.951 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.951 + source /etc/os-release 00:00:40.951 ++ NAME='Fedora Linux' 00:00:40.951 ++ VERSION='38 (Cloud Edition)' 00:00:40.951 ++ ID=fedora 00:00:40.951 ++ VERSION_ID=38 00:00:40.951 ++ VERSION_CODENAME= 00:00:40.951 ++ PLATFORM_ID=platform:f38 00:00:40.951 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:40.951 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:40.951 ++ LOGO=fedora-logo-icon 00:00:40.951 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:40.951 ++ HOME_URL=https://fedoraproject.org/ 00:00:40.951 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:40.951 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:40.951 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:40.951 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:40.951 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:40.951 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:40.951 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:40.951 ++ SUPPORT_END=2024-05-14 00:00:40.951 ++ VARIANT='Cloud Edition' 00:00:40.951 ++ VARIANT_ID=cloud 00:00:40.951 + uname -a 00:00:40.951 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:40.951 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:43.487 Hugepages 00:00:43.487 node hugesize free / total 00:00:43.487 node0 1048576kB 0 / 0 00:00:43.487 node0 2048kB 0 / 0 00:00:43.487 node1 1048576kB 0 / 0 00:00:43.487 node1 2048kB 0 / 0 00:00:43.487 00:00:43.487 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:43.487 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:43.487 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:43.487 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:43.487 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:43.487 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:43.487 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:43.487 I/OAT 
0000:00:04.6 8086 2021 0 ioatdma - - 00:00:43.487 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:43.487 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:43.487 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:43.487 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:43.487 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:43.487 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:43.487 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:43.487 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:43.487 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:43.487 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:43.487 + rm -f /tmp/spdk-ld-path 00:00:43.487 + source autorun-spdk.conf 00:00:43.487 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.487 ++ SPDK_TEST_NVMF=1 00:00:43.487 ++ SPDK_TEST_NVME_CLI=1 00:00:43.487 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.487 ++ SPDK_TEST_NVMF_NICS=e810 00:00:43.487 ++ SPDK_TEST_VFIOUSER=1 00:00:43.487 ++ SPDK_RUN_UBSAN=1 00:00:43.487 ++ NET_TYPE=phy 00:00:43.487 ++ RUN_NIGHTLY=0 00:00:43.487 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:43.487 + [[ -n '' ]] 00:00:43.487 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.487 + for M in /var/spdk/build-*-manifest.txt 00:00:43.487 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:43.487 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:43.487 + for M in /var/spdk/build-*-manifest.txt 00:00:43.487 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:43.487 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:43.487 ++ uname 00:00:43.487 + [[ Linux == \L\i\n\u\x ]] 00:00:43.487 + sudo dmesg -T 00:00:43.487 + sudo dmesg --clear 00:00:43.487 + dmesg_pid=3449265 00:00:43.487 + [[ Fedora Linux == FreeBSD ]] 00:00:43.487 + sudo dmesg -Tw 00:00:43.487 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:43.487 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:43.488 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:43.488 + [[ -x /usr/src/fio-static/fio ]] 00:00:43.488 + export FIO_BIN=/usr/src/fio-static/fio 00:00:43.488 + FIO_BIN=/usr/src/fio-static/fio 00:00:43.488 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:43.488 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:43.488 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:43.488 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:43.488 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:43.488 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:43.488 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:43.488 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:43.488 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:43.488 Test configuration: 00:00:43.488 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.488 SPDK_TEST_NVMF=1 00:00:43.488 SPDK_TEST_NVME_CLI=1 00:00:43.488 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.488 SPDK_TEST_NVMF_NICS=e810 00:00:43.488 SPDK_TEST_VFIOUSER=1 00:00:43.488 SPDK_RUN_UBSAN=1 00:00:43.488 NET_TYPE=phy 00:00:43.747 RUN_NIGHTLY=0 15:38:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:43.747 15:38:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:43.747 15:38:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:43.747 15:38:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:43.747 15:38:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:43.747 15:38:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:43.747 15:38:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:43.747 15:38:42 -- paths/export.sh@5 -- $ export PATH 00:00:43.747 15:38:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:43.748 15:38:42 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:43.748 15:38:42 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:43.748 15:38:42 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715780322.XXXXXX 00:00:43.748 15:38:42 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715780322.sYglX0 00:00:43.748 15:38:42 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:43.748 15:38:42 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:43.748 15:38:42 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:43.748 15:38:42 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:43.748 15:38:42 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:43.748 15:38:42 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:43.748 15:38:42 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:43.748 15:38:42 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.748 15:38:42 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:43.748 15:38:42 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:43.748 15:38:42 -- pm/common@17 -- $ local monitor 00:00:43.748 15:38:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:43.748 15:38:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:43.748 15:38:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:43.748 15:38:42 -- pm/common@21 -- $ date +%s 00:00:43.748 15:38:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:43.748 15:38:42 -- pm/common@21 -- $ date +%s 00:00:43.748 15:38:42 -- pm/common@21 -- $ date +%s 00:00:43.748 15:38:42 -- pm/common@25 -- $ sleep 1 00:00:43.748 15:38:42 -- pm/common@21 -- $ date +%s 00:00:43.748 15:38:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715780322 00:00:43.748 15:38:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715780322 00:00:43.748 15:38:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715780322 00:00:43.748 15:38:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715780322 00:00:43.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715780322_collect-cpu-temp.pm.log 00:00:43.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715780322_collect-vmstat.pm.log 00:00:43.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715780322_collect-cpu-load.pm.log 00:00:43.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715780322_collect-bmc-pm.bmc.pm.log 00:00:44.685 15:38:43 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:44.685 15:38:43 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:44.685 15:38:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:44.685 15:38:43 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:44.685 15:38:43 -- spdk/autobuild.sh@16 -- $ date -u 00:00:44.685 Wed May 15 01:38:43 PM UTC 2024 00:00:44.685 15:38:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:44.685 v24.05-pre-661-gc3870302f 00:00:44.685 15:38:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:44.685 15:38:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:44.685 15:38:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:44.685 15:38:43 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:44.685 15:38:43 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:44.685 15:38:43 -- common/autotest_common.sh@10 -- $ set +x 00:00:44.685 ************************************ 00:00:44.685 START TEST ubsan 00:00:44.685 ************************************ 00:00:44.685 15:38:43 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:44.685 using ubsan 00:00:44.685 00:00:44.685 real 0m0.001s 00:00:44.685 user 0m0.000s 00:00:44.685 sys 0m0.000s 00:00:44.685 15:38:43 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:44.685 15:38:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:44.685 ************************************ 00:00:44.685 END TEST ubsan 00:00:44.685 ************************************ 00:00:44.685 15:38:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:44.685 15:38:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:44.685 15:38:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:44.685 15:38:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:44.685 15:38:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:44.685 15:38:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:44.685 15:38:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:44.685 15:38:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:44.685 15:38:43 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:44.944 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:44.944 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:45.512 Using 'verbs' RDMA provider 00:01:00.975 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:13.189 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:13.189 Creating mk/config.mk...done. 00:01:13.189 Creating mk/cc.flags.mk...done. 00:01:13.189 Type 'make' to build. 00:01:13.189 15:39:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:13.189 15:39:10 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:13.189 15:39:10 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:13.189 15:39:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.189 ************************************ 00:01:13.189 START TEST make 00:01:13.189 ************************************ 00:01:13.189 15:39:11 make -- common/autotest_common.sh@1121 -- $ make -j112 00:01:13.189 make[1]: Nothing to be done for 'all'. 
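Both the ubsan check and the main build above go through SPDK's run_test helper (common/autotest_common.sh in this log's stack frames), which brackets the wrapped command with START TEST / END TEST banners and reports real/user/sys timing. A rough bash sketch of that pattern, illustrative only; the real helper also toggles xtrace and does per-test bookkeeping:

    # Simplified run_test in the spirit of the banners above (not the real helper).
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test make make -j112    # as invoked by autobuild.sh above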
00:01:14.137 The Meson build system 00:01:14.137 Version: 1.3.1 00:01:14.137 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:14.137 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:14.137 Build type: native build 00:01:14.137 Project name: libvfio-user 00:01:14.137 Project version: 0.0.1 00:01:14.137 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:14.137 C linker for the host machine: cc ld.bfd 2.39-16 00:01:14.137 Host machine cpu family: x86_64 00:01:14.137 Host machine cpu: x86_64 00:01:14.137 Run-time dependency threads found: YES 00:01:14.137 Library dl found: YES 00:01:14.137 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:14.137 Run-time dependency json-c found: YES 0.17 00:01:14.137 Run-time dependency cmocka found: YES 1.1.7 00:01:14.137 Program pytest-3 found: NO 00:01:14.137 Program flake8 found: NO 00:01:14.137 Program misspell-fixer found: NO 00:01:14.137 Program restructuredtext-lint found: NO 00:01:14.137 Program valgrind found: YES (/usr/bin/valgrind) 00:01:14.137 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:14.137 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:14.137 Compiler for C supports arguments -Wwrite-strings: YES 00:01:14.137 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:14.137 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:14.137 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:14.137 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:14.137 Build targets in project: 8 00:01:14.137 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:14.137 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:14.137 00:01:14.137 libvfio-user 0.0.1 00:01:14.137 00:01:14.137 User defined options 00:01:14.137 buildtype : debug 00:01:14.137 default_library: shared 00:01:14.137 libdir : /usr/local/lib 00:01:14.137 00:01:14.137 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:14.704 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:14.704 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:14.704 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:14.704 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:14.704 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:14.704 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:14.704 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:14.704 [7/37] Compiling C object samples/null.p/null.c.o 00:01:14.704 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:14.704 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:14.704 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:14.704 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:14.704 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:14.704 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:14.704 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:14.704 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:14.704 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:14.704 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:14.962 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:14.962 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:14.962 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:14.962 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:14.962 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:14.963 [23/37] Compiling C object samples/client.p/client.c.o 00:01:14.963 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:14.963 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:14.963 [26/37] Compiling C object samples/server.p/server.c.o 00:01:14.963 [27/37] Linking target samples/client 00:01:14.963 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:14.963 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:14.963 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:14.963 [31/37] Linking target test/unit_tests 00:01:15.221 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:15.221 [33/37] Linking target samples/server 00:01:15.221 [34/37] Linking target samples/null 00:01:15.221 [35/37] Linking target samples/gpio-pci-idio-16 00:01:15.221 [36/37] Linking target samples/lspci 00:01:15.221 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:15.221 INFO: autodetecting backend as ninja 00:01:15.221 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
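The libvfio-user step above is a standard Meson out-of-tree debug build: configure into build-debug, compile with ninja, then stage the result with a DESTDIR'd meson install (the quiet install that follows). A reconstruction of the equivalent commands, with option values taken from the configure output above; the exact invocation inside SPDK's build scripts may differ:

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

    # Options mirror the "User defined options" block above.
    meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C "$BUILD"
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"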
00:01:15.221 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:15.479 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:15.479 ninja: no work to do. 00:01:20.749 The Meson build system 00:01:20.749 Version: 1.3.1 00:01:20.749 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:20.749 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:20.749 Build type: native build 00:01:20.749 Program cat found: YES (/usr/bin/cat) 00:01:20.749 Project name: DPDK 00:01:20.749 Project version: 23.11.0 00:01:20.749 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:20.749 C linker for the host machine: cc ld.bfd 2.39-16 00:01:20.749 Host machine cpu family: x86_64 00:01:20.749 Host machine cpu: x86_64 00:01:20.749 Message: ## Building in Developer Mode ## 00:01:20.749 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:20.749 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:20.749 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:20.749 Program python3 found: YES (/usr/bin/python3) 00:01:20.749 Program cat found: YES (/usr/bin/cat) 00:01:20.749 Compiler for C supports arguments -march=native: YES 00:01:20.749 Checking for size of "void *" : 8 00:01:20.749 Checking for size of "void *" : 8 (cached) 00:01:20.749 Library m found: YES 00:01:20.749 Library numa found: YES 00:01:20.749 Has header "numaif.h" : YES 00:01:20.749 Library fdt found: NO 00:01:20.749 Library execinfo found: NO 00:01:20.749 Has header "execinfo.h" : YES 00:01:20.749 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:20.749 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:20.749 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:20.749 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:20.749 Run-time dependency openssl found: YES 3.0.9 00:01:20.749 Run-time dependency libpcap found: YES 1.10.4 00:01:20.749 Has header "pcap.h" with dependency libpcap: YES 00:01:20.749 Compiler for C supports arguments -Wcast-qual: YES 00:01:20.749 Compiler for C supports arguments -Wdeprecated: YES 00:01:20.749 Compiler for C supports arguments -Wformat: YES 00:01:20.749 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:20.749 Compiler for C supports arguments -Wformat-security: NO 00:01:20.749 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:20.749 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:20.749 Compiler for C supports arguments -Wnested-externs: YES 00:01:20.749 Compiler for C supports arguments -Wold-style-definition: YES 00:01:20.749 Compiler for C supports arguments -Wpointer-arith: YES 00:01:20.749 Compiler for C supports arguments -Wsign-compare: YES 00:01:20.749 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:20.749 Compiler for C supports arguments -Wundef: YES 00:01:20.749 Compiler for C supports arguments -Wwrite-strings: YES 00:01:20.749 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:20.749 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:20.749 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:20.749 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:20.749 Program objdump found: YES (/usr/bin/objdump) 00:01:20.749 Compiler for C supports arguments -mavx512f: YES 00:01:20.749 Checking if "AVX512 checking" compiles: YES 00:01:20.749 Fetching value of define "__SSE4_2__" : 1 00:01:20.749 Fetching value of define "__AES__" : 1 00:01:20.749 Fetching value of define "__AVX__" : 1 00:01:20.749 Fetching value of define "__AVX2__" : 1 00:01:20.749 Fetching value of define "__AVX512BW__" : 1 00:01:20.749 Fetching value of define "__AVX512CD__" : 1 00:01:20.749 Fetching value of define "__AVX512DQ__" : 1 00:01:20.749 Fetching value of define "__AVX512F__" : 1 00:01:20.749 Fetching value of define "__AVX512VL__" : 1 00:01:20.749 Fetching value of define "__PCLMUL__" : 1 00:01:20.749 Fetching value of define "__RDRND__" : 1 00:01:20.749 Fetching value of define "__RDSEED__" : 1 00:01:20.749 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:20.749 Fetching value of define "__znver1__" : (undefined) 00:01:20.749 Fetching value of define "__znver2__" : (undefined) 00:01:20.749 Fetching value of define "__znver3__" : (undefined) 00:01:20.749 Fetching value of define "__znver4__" : (undefined) 00:01:20.749 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:20.749 Message: lib/log: Defining dependency "log" 00:01:20.749 Message: lib/kvargs: Defining dependency "kvargs" 00:01:20.749 Message: lib/telemetry: Defining dependency "telemetry" 00:01:20.749 Checking for function "getentropy" : NO 00:01:20.749 Message: lib/eal: Defining dependency "eal" 00:01:20.749 Message: lib/ring: Defining dependency "ring" 00:01:20.749 Message: lib/rcu: Defining dependency "rcu" 00:01:20.749 Message: lib/mempool: Defining dependency "mempool" 00:01:20.749 Message: lib/mbuf: Defining dependency "mbuf" 00:01:20.749 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:20.749 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:20.749 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:20.749 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:20.749 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:20.749 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:20.749 Compiler for C supports arguments -mpclmul: YES 00:01:20.749 Compiler for C supports arguments -maes: YES 00:01:20.749 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:20.749 Compiler for C supports arguments -mavx512bw: YES 00:01:20.749 Compiler for C supports arguments -mavx512dq: YES 00:01:20.749 Compiler for C supports arguments -mavx512vl: YES 00:01:20.749 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:20.749 Compiler for C supports arguments -mavx2: YES 00:01:20.749 Compiler for C supports arguments -mavx: YES 00:01:20.749 Message: lib/net: Defining dependency "net" 00:01:20.749 Message: lib/meter: Defining dependency "meter" 00:01:20.749 Message: lib/ethdev: Defining dependency "ethdev" 00:01:20.749 Message: lib/pci: Defining dependency "pci" 00:01:20.749 Message: lib/cmdline: Defining dependency "cmdline" 00:01:20.749 Message: lib/hash: Defining dependency "hash" 00:01:20.749 Message: lib/timer: Defining dependency "timer" 00:01:20.749 Message: lib/compressdev: Defining dependency "compressdev" 00:01:20.749 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:20.749 Message: lib/dmadev: Defining dependency "dmadev" 00:01:20.749 Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:20.749 Message: lib/power: Defining dependency "power" 00:01:20.749 Message: lib/reorder: Defining dependency "reorder" 00:01:20.749 Message: lib/security: Defining dependency "security" 00:01:20.749 Has header "linux/userfaultfd.h" : YES 00:01:20.749 Has header "linux/vduse.h" : YES 00:01:20.749 Message: lib/vhost: Defining dependency "vhost" 00:01:20.749 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:20.749 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:20.749 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:20.749 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:20.749 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:20.749 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:20.749 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:20.749 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:20.749 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:20.750 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:20.750 Program doxygen found: YES (/usr/bin/doxygen) 00:01:20.750 Configuring doxy-api-html.conf using configuration 00:01:20.750 Configuring doxy-api-man.conf using configuration 00:01:20.750 Program mandb found: YES (/usr/bin/mandb) 00:01:20.750 Program sphinx-build found: NO 00:01:20.750 Configuring rte_build_config.h using configuration 00:01:20.750 Message: 00:01:20.750 ================= 00:01:20.750 Applications Enabled 00:01:20.750 ================= 00:01:20.750 00:01:20.750 apps: 00:01:20.750 00:01:20.750 00:01:20.750 Message: 00:01:20.750 ================= 00:01:20.750 Libraries Enabled 00:01:20.750 ================= 00:01:20.750 00:01:20.750 libs: 00:01:20.750 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:20.750 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:20.750 cryptodev, dmadev, power, reorder, security, vhost, 00:01:20.750 00:01:20.750 Message: 00:01:20.750 =============== 00:01:20.750 Drivers Enabled 00:01:20.750 =============== 00:01:20.750 00:01:20.750 common: 00:01:20.750 00:01:20.750 bus: 00:01:20.750 pci, vdev, 00:01:20.750 mempool: 00:01:20.750 ring, 00:01:20.750 dma: 00:01:20.750 00:01:20.750 net: 00:01:20.750 00:01:20.750 crypto: 00:01:20.750 00:01:20.750 compress: 00:01:20.750 00:01:20.750 vdpa: 00:01:20.750 00:01:20.750 00:01:20.750 Message: 00:01:20.750 ================= 00:01:20.750 Content Skipped 00:01:20.750 ================= 00:01:20.750 00:01:20.750 apps: 00:01:20.750 dumpcap: explicitly disabled via build config 00:01:20.750 graph: explicitly disabled via build config 00:01:20.750 pdump: explicitly disabled via build config 00:01:20.750 proc-info: explicitly disabled via build config 00:01:20.750 test-acl: explicitly disabled via build config 00:01:20.750 test-bbdev: explicitly disabled via build config 00:01:20.750 test-cmdline: explicitly disabled via build config 00:01:20.750 test-compress-perf: explicitly disabled via build config 00:01:20.750 test-crypto-perf: explicitly disabled via build config 00:01:20.750 test-dma-perf: explicitly disabled via build config 00:01:20.750 test-eventdev: explicitly disabled via build config 00:01:20.750 test-fib: explicitly disabled via build config 00:01:20.750 test-flow-perf: explicitly disabled via build config 00:01:20.750 test-gpudev: explicitly disabled via build config 00:01:20.750 test-mldev: explicitly disabled via build 
config 00:01:20.750 test-pipeline: explicitly disabled via build config 00:01:20.750 test-pmd: explicitly disabled via build config 00:01:20.750 test-regex: explicitly disabled via build config 00:01:20.750 test-sad: explicitly disabled via build config 00:01:20.750 test-security-perf: explicitly disabled via build config 00:01:20.750 00:01:20.750 libs: 00:01:20.750 metrics: explicitly disabled via build config 00:01:20.750 acl: explicitly disabled via build config 00:01:20.750 bbdev: explicitly disabled via build config 00:01:20.750 bitratestats: explicitly disabled via build config 00:01:20.750 bpf: explicitly disabled via build config 00:01:20.750 cfgfile: explicitly disabled via build config 00:01:20.750 distributor: explicitly disabled via build config 00:01:20.750 efd: explicitly disabled via build config 00:01:20.750 eventdev: explicitly disabled via build config 00:01:20.750 dispatcher: explicitly disabled via build config 00:01:20.750 gpudev: explicitly disabled via build config 00:01:20.750 gro: explicitly disabled via build config 00:01:20.750 gso: explicitly disabled via build config 00:01:20.750 ip_frag: explicitly disabled via build config 00:01:20.750 jobstats: explicitly disabled via build config 00:01:20.750 latencystats: explicitly disabled via build config 00:01:20.750 lpm: explicitly disabled via build config 00:01:20.750 member: explicitly disabled via build config 00:01:20.750 pcapng: explicitly disabled via build config 00:01:20.750 rawdev: explicitly disabled via build config 00:01:20.750 regexdev: explicitly disabled via build config 00:01:20.750 mldev: explicitly disabled via build config 00:01:20.750 rib: explicitly disabled via build config 00:01:20.750 sched: explicitly disabled via build config 00:01:20.750 stack: explicitly disabled via build config 00:01:20.750 ipsec: explicitly disabled via build config 00:01:20.750 pdcp: explicitly disabled via build config 00:01:20.750 fib: explicitly disabled via build config 00:01:20.750 port: explicitly disabled via build config 00:01:20.750 pdump: explicitly disabled via build config 00:01:20.750 table: explicitly disabled via build config 00:01:20.750 pipeline: explicitly disabled via build config 00:01:20.750 graph: explicitly disabled via build config 00:01:20.750 node: explicitly disabled via build config 00:01:20.750 00:01:20.750 drivers: 00:01:20.750 common/cpt: not in enabled drivers build config 00:01:20.750 common/dpaax: not in enabled drivers build config 00:01:20.750 common/iavf: not in enabled drivers build config 00:01:20.750 common/idpf: not in enabled drivers build config 00:01:20.750 common/mvep: not in enabled drivers build config 00:01:20.750 common/octeontx: not in enabled drivers build config 00:01:20.750 bus/auxiliary: not in enabled drivers build config 00:01:20.750 bus/cdx: not in enabled drivers build config 00:01:20.750 bus/dpaa: not in enabled drivers build config 00:01:20.750 bus/fslmc: not in enabled drivers build config 00:01:20.750 bus/ifpga: not in enabled drivers build config 00:01:20.750 bus/platform: not in enabled drivers build config 00:01:20.750 bus/vmbus: not in enabled drivers build config 00:01:20.750 common/cnxk: not in enabled drivers build config 00:01:20.750 common/mlx5: not in enabled drivers build config 00:01:20.750 common/nfp: not in enabled drivers build config 00:01:20.750 common/qat: not in enabled drivers build config 00:01:20.750 common/sfc_efx: not in enabled drivers build config 00:01:20.750 mempool/bucket: not in enabled drivers build config 00:01:20.750 
mempool/cnxk: not in enabled drivers build config 00:01:20.750 mempool/dpaa: not in enabled drivers build config 00:01:20.750 mempool/dpaa2: not in enabled drivers build config 00:01:20.750 mempool/octeontx: not in enabled drivers build config 00:01:20.750 mempool/stack: not in enabled drivers build config 00:01:20.750 dma/cnxk: not in enabled drivers build config 00:01:20.750 dma/dpaa: not in enabled drivers build config 00:01:20.750 dma/dpaa2: not in enabled drivers build config 00:01:20.750 dma/hisilicon: not in enabled drivers build config 00:01:20.750 dma/idxd: not in enabled drivers build config 00:01:20.750 dma/ioat: not in enabled drivers build config 00:01:20.750 dma/skeleton: not in enabled drivers build config 00:01:20.750 net/af_packet: not in enabled drivers build config 00:01:20.750 net/af_xdp: not in enabled drivers build config 00:01:20.750 net/ark: not in enabled drivers build config 00:01:20.750 net/atlantic: not in enabled drivers build config 00:01:20.750 net/avp: not in enabled drivers build config 00:01:20.750 net/axgbe: not in enabled drivers build config 00:01:20.750 net/bnx2x: not in enabled drivers build config 00:01:20.750 net/bnxt: not in enabled drivers build config 00:01:20.750 net/bonding: not in enabled drivers build config 00:01:20.750 net/cnxk: not in enabled drivers build config 00:01:20.750 net/cpfl: not in enabled drivers build config 00:01:20.750 net/cxgbe: not in enabled drivers build config 00:01:20.750 net/dpaa: not in enabled drivers build config 00:01:20.750 net/dpaa2: not in enabled drivers build config 00:01:20.750 net/e1000: not in enabled drivers build config 00:01:20.750 net/ena: not in enabled drivers build config 00:01:20.750 net/enetc: not in enabled drivers build config 00:01:20.750 net/enetfec: not in enabled drivers build config 00:01:20.750 net/enic: not in enabled drivers build config 00:01:20.750 net/failsafe: not in enabled drivers build config 00:01:20.750 net/fm10k: not in enabled drivers build config 00:01:20.750 net/gve: not in enabled drivers build config 00:01:20.750 net/hinic: not in enabled drivers build config 00:01:20.750 net/hns3: not in enabled drivers build config 00:01:20.750 net/i40e: not in enabled drivers build config 00:01:20.750 net/iavf: not in enabled drivers build config 00:01:20.750 net/ice: not in enabled drivers build config 00:01:20.750 net/idpf: not in enabled drivers build config 00:01:20.750 net/igc: not in enabled drivers build config 00:01:20.750 net/ionic: not in enabled drivers build config 00:01:20.750 net/ipn3ke: not in enabled drivers build config 00:01:20.750 net/ixgbe: not in enabled drivers build config 00:01:20.750 net/mana: not in enabled drivers build config 00:01:20.750 net/memif: not in enabled drivers build config 00:01:20.750 net/mlx4: not in enabled drivers build config 00:01:20.750 net/mlx5: not in enabled drivers build config 00:01:20.750 net/mvneta: not in enabled drivers build config 00:01:20.750 net/mvpp2: not in enabled drivers build config 00:01:20.750 net/netvsc: not in enabled drivers build config 00:01:20.750 net/nfb: not in enabled drivers build config 00:01:20.750 net/nfp: not in enabled drivers build config 00:01:20.750 net/ngbe: not in enabled drivers build config 00:01:20.750 net/null: not in enabled drivers build config 00:01:20.750 net/octeontx: not in enabled drivers build config 00:01:20.750 net/octeon_ep: not in enabled drivers build config 00:01:20.750 net/pcap: not in enabled drivers build config 00:01:20.750 net/pfe: not in enabled drivers build config 
00:01:20.750 net/qede: not in enabled drivers build config 00:01:20.750 net/ring: not in enabled drivers build config 00:01:20.750 net/sfc: not in enabled drivers build config 00:01:20.750 net/softnic: not in enabled drivers build config 00:01:20.750 net/tap: not in enabled drivers build config 00:01:20.750 net/thunderx: not in enabled drivers build config 00:01:20.750 net/txgbe: not in enabled drivers build config 00:01:20.750 net/vdev_netvsc: not in enabled drivers build config 00:01:20.750 net/vhost: not in enabled drivers build config 00:01:20.750 net/virtio: not in enabled drivers build config 00:01:20.750 net/vmxnet3: not in enabled drivers build config 00:01:20.750 raw/*: missing internal dependency, "rawdev" 00:01:20.750 crypto/armv8: not in enabled drivers build config 00:01:20.750 crypto/bcmfs: not in enabled drivers build config 00:01:20.750 crypto/caam_jr: not in enabled drivers build config 00:01:20.750 crypto/ccp: not in enabled drivers build config 00:01:20.750 crypto/cnxk: not in enabled drivers build config 00:01:20.750 crypto/dpaa_sec: not in enabled drivers build config 00:01:20.750 crypto/dpaa2_sec: not in enabled drivers build config 00:01:20.750 crypto/ipsec_mb: not in enabled drivers build config 00:01:20.751 crypto/mlx5: not in enabled drivers build config 00:01:20.751 crypto/mvsam: not in enabled drivers build config 00:01:20.751 crypto/nitrox: not in enabled drivers build config 00:01:20.751 crypto/null: not in enabled drivers build config 00:01:20.751 crypto/octeontx: not in enabled drivers build config 00:01:20.751 crypto/openssl: not in enabled drivers build config 00:01:20.751 crypto/scheduler: not in enabled drivers build config 00:01:20.751 crypto/uadk: not in enabled drivers build config 00:01:20.751 crypto/virtio: not in enabled drivers build config 00:01:20.751 compress/isal: not in enabled drivers build config 00:01:20.751 compress/mlx5: not in enabled drivers build config 00:01:20.751 compress/octeontx: not in enabled drivers build config 00:01:20.751 compress/zlib: not in enabled drivers build config 00:01:20.751 regex/*: missing internal dependency, "regexdev" 00:01:20.751 ml/*: missing internal dependency, "mldev" 00:01:20.751 vdpa/ifc: not in enabled drivers build config 00:01:20.751 vdpa/mlx5: not in enabled drivers build config 00:01:20.751 vdpa/nfp: not in enabled drivers build config 00:01:20.751 vdpa/sfc: not in enabled drivers build config 00:01:20.751 event/*: missing internal dependency, "eventdev" 00:01:20.751 baseband/*: missing internal dependency, "bbdev" 00:01:20.751 gpu/*: missing internal dependency, "gpudev" 00:01:20.751 00:01:20.751 00:01:21.009 Build targets in project: 85 00:01:21.009 00:01:21.009 DPDK 23.11.0 00:01:21.009 00:01:21.009 User defined options 00:01:21.009 buildtype : debug 00:01:21.009 default_library : shared 00:01:21.009 libdir : lib 00:01:21.009 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:21.009 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:21.009 c_link_args : 00:01:21.009 cpu_instruction_set: native 00:01:21.009 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:21.009 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:21.009 enable_docs : false 00:01:21.009 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:21.009 enable_kmods : false 00:01:21.009 tests : false 00:01:21.009 00:01:21.009 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:21.585 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:21.585 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:21.585 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:21.585 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:21.585 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:21.585 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:21.585 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:21.585 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:21.585 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:21.585 [9/265] Linking static target lib/librte_kvargs.a 00:01:21.585 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:21.585 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:21.585 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:21.585 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:21.585 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:21.585 [15/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:21.585 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:21.585 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:21.585 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:21.845 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:21.845 [20/265] Linking static target lib/librte_log.a 00:01:21.845 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:21.845 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:21.845 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:21.845 [24/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:21.845 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:21.845 [26/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:21.845 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:21.845 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:21.845 [29/265] Linking static target lib/librte_pci.a 00:01:21.845 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:21.845 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:21.845 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:21.845 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:21.845 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:21.845 [35/265] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:21.845 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:21.845 [37/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:21.845 [38/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:21.845 [39/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:21.846 [40/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:22.103 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:22.103 [42/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:22.103 [43/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:22.103 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:22.103 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:22.103 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:22.103 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:22.103 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:22.103 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:22.103 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:22.103 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:22.103 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:22.103 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:22.103 [54/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:22.103 [55/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:22.103 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:22.104 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:22.104 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:22.104 [59/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:22.104 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:22.104 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:22.104 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:22.104 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:22.104 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:22.104 [65/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.104 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:22.104 [67/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:22.104 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:22.104 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:22.104 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:22.104 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:22.104 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:22.104 [73/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:22.104 [74/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 
00:01:22.104 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:22.104 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:22.104 [77/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:22.104 [78/265] Linking static target lib/librte_ring.a 00:01:22.104 [79/265] Linking static target lib/librte_meter.a 00:01:22.104 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:22.104 [81/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:22.104 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:22.104 [83/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.104 [84/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:22.104 [85/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:22.104 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:22.104 [87/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:22.104 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:22.104 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:22.363 [90/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:22.363 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:22.363 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:22.363 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:22.363 [94/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:22.363 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:22.363 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:22.363 [97/265] Linking static target lib/librte_telemetry.a 00:01:22.363 [98/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:22.363 [99/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:22.363 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:22.363 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:22.363 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:22.363 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:22.363 [104/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:22.363 [105/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:22.363 [106/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:22.363 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:22.363 [108/265] Linking static target lib/librte_timer.a 00:01:22.363 [109/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:22.363 [110/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:22.363 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:22.363 [112/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:22.363 [113/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:22.363 [114/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:22.363 [115/265] Linking static 
target lib/librte_cmdline.a 00:01:22.363 [116/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:22.363 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:22.363 [118/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:22.363 [119/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:22.363 [120/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:22.363 [121/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:22.363 [122/265] Linking static target lib/librte_rcu.a 00:01:22.363 [123/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:22.363 [124/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:22.363 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:22.363 [126/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:22.363 [127/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:22.363 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:22.363 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:22.363 [130/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:22.363 [131/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:22.363 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:22.363 [133/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:22.363 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:22.363 [135/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:22.363 [136/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:22.363 [137/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:22.363 [138/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:22.363 [139/265] Linking static target lib/librte_net.a 00:01:22.363 [140/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:22.363 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:22.363 [142/265] Linking static target lib/librte_mempool.a 00:01:22.363 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:22.363 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:22.363 [145/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:22.363 [146/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:22.363 [147/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:22.363 [148/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:22.363 [149/265] Linking static target lib/librte_dmadev.a 00:01:22.363 [150/265] Linking static target lib/librte_power.a 00:01:22.363 [151/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:22.363 [152/265] Linking static target lib/librte_eal.a 00:01:22.363 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:22.363 [154/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:22.363 [155/265] Linking static target lib/librte_compressdev.a 00:01:22.363 [156/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:22.363 [157/265] Linking static target 
lib/librte_mbuf.a 00:01:22.363 [158/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:22.363 [159/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:22.363 [160/265] Linking static target lib/librte_reorder.a 00:01:22.363 [161/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:22.363 [162/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.624 [163/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:22.624 [164/265] Linking static target lib/librte_security.a 00:01:22.624 [165/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.624 [166/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:22.624 [167/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:22.624 [168/265] Linking target lib/librte_log.so.24.0 00:01:22.624 [169/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.624 [170/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:22.624 [171/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:22.624 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:22.624 [173/265] Linking static target lib/librte_hash.a 00:01:22.624 [174/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:22.624 [175/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:22.624 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:22.624 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:22.624 [178/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:22.624 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:22.624 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:22.624 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:22.624 [182/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:22.624 [183/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.624 [184/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:22.624 [185/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:22.624 [186/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:22.624 [187/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.624 [188/265] Linking target lib/librte_kvargs.so.24.0 00:01:22.624 [189/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:22.624 [190/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:22.885 [191/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.885 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:22.885 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:22.885 [194/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:22.885 [195/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.885 [196/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.885 [197/265] 
Linking static target lib/librte_cryptodev.a 00:01:22.885 [198/265] Linking static target drivers/librte_bus_vdev.a 00:01:22.885 [199/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.885 [200/265] Linking target lib/librte_telemetry.so.24.0 00:01:22.885 [201/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:22.885 [202/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:22.885 [203/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:22.885 [204/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.885 [205/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.885 [206/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.885 [207/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.885 [208/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.885 [209/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.885 [210/265] Linking static target drivers/librte_mempool_ring.a 00:01:22.885 [211/265] Linking static target drivers/librte_bus_pci.a 00:01:22.885 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:23.144 [213/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.144 [214/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:23.144 [215/265] Linking static target lib/librte_ethdev.a 00:01:23.144 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.144 [217/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.402 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.402 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:23.402 [220/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.402 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.402 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.661 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.661 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.227 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:24.227 [226/265] Linking static target lib/librte_vhost.a 00:01:25.161 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.581 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.138 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.040 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.296 [231/265] Linking target lib/librte_eal.so.24.0 00:01:35.296 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:35.296 
[233/265] Linking target lib/librte_meter.so.24.0 00:01:35.296 [234/265] Linking target lib/librte_pci.so.24.0 00:01:35.296 [235/265] Linking target lib/librte_ring.so.24.0 00:01:35.296 [236/265] Linking target lib/librte_timer.so.24.0 00:01:35.296 [237/265] Linking target lib/librte_dmadev.so.24.0 00:01:35.296 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:35.554 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:35.554 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:35.554 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:35.554 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:35.554 [243/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:35.554 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:35.554 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:35.554 [246/265] Linking target lib/librte_rcu.so.24.0 00:01:35.812 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:35.812 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:35.812 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:35.812 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:35.812 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:36.070 [252/265] Linking target lib/librte_net.so.24.0 00:01:36.070 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:36.070 [254/265] Linking target lib/librte_reorder.so.24.0 00:01:36.070 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:36.070 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:36.070 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:36.070 [258/265] Linking target lib/librte_hash.so.24.0 00:01:36.070 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:36.070 [260/265] Linking target lib/librte_security.so.24.0 00:01:36.070 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:36.328 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:36.328 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:36.328 [264/265] Linking target lib/librte_power.so.24.0 00:01:36.328 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:36.328 INFO: autodetecting backend as ninja 00:01:36.328 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:37.702 CC lib/ut_mock/mock.o 00:01:37.702 CC lib/log/log.o 00:01:37.702 CC lib/log/log_deprecated.o 00:01:37.702 CC lib/log/log_flags.o 00:01:37.702 CC lib/ut/ut.o 00:01:37.702 LIB libspdk_ut_mock.a 00:01:37.702 SO libspdk_ut_mock.so.6.0 00:01:37.702 LIB libspdk_log.a 00:01:37.702 LIB libspdk_ut.a 00:01:37.702 SO libspdk_log.so.7.0 00:01:37.702 SO libspdk_ut.so.2.0 00:01:37.702 SYMLINK libspdk_ut_mock.so 00:01:37.702 SYMLINK libspdk_log.so 00:01:37.702 SYMLINK libspdk_ut.so 00:01:37.959 CC lib/ioat/ioat.o 00:01:37.959 CC lib/dma/dma.o 00:01:37.959 CC lib/util/cpuset.o 00:01:37.959 CC lib/util/base64.o 00:01:37.959 CC lib/util/bit_array.o 00:01:37.959 CC lib/util/crc16.o 00:01:37.959 CC lib/util/crc32.o 00:01:37.959 CC lib/util/crc32c.o 00:01:38.217 CXX 
lib/trace_parser/trace.o 00:01:38.217 CC lib/util/dif.o 00:01:38.217 CC lib/util/crc32_ieee.o 00:01:38.217 CC lib/util/crc64.o 00:01:38.217 CC lib/util/file.o 00:01:38.217 CC lib/util/fd.o 00:01:38.217 CC lib/util/hexlify.o 00:01:38.217 CC lib/util/iov.o 00:01:38.217 CC lib/util/math.o 00:01:38.217 CC lib/util/pipe.o 00:01:38.217 CC lib/util/strerror_tls.o 00:01:38.217 CC lib/util/string.o 00:01:38.217 CC lib/util/uuid.o 00:01:38.217 CC lib/util/fd_group.o 00:01:38.217 CC lib/util/xor.o 00:01:38.217 CC lib/util/zipf.o 00:01:38.217 CC lib/vfio_user/host/vfio_user_pci.o 00:01:38.217 CC lib/vfio_user/host/vfio_user.o 00:01:38.217 LIB libspdk_dma.a 00:01:38.217 SO libspdk_dma.so.4.0 00:01:38.217 LIB libspdk_ioat.a 00:01:38.475 SYMLINK libspdk_dma.so 00:01:38.475 SO libspdk_ioat.so.7.0 00:01:38.475 SYMLINK libspdk_ioat.so 00:01:38.475 LIB libspdk_vfio_user.a 00:01:38.475 SO libspdk_vfio_user.so.5.0 00:01:38.475 LIB libspdk_util.a 00:01:38.475 SYMLINK libspdk_vfio_user.so 00:01:38.475 SO libspdk_util.so.9.0 00:01:38.732 SYMLINK libspdk_util.so 00:01:38.732 LIB libspdk_trace_parser.a 00:01:38.732 SO libspdk_trace_parser.so.5.0 00:01:38.991 SYMLINK libspdk_trace_parser.so 00:01:38.991 CC lib/conf/conf.o 00:01:38.991 CC lib/idxd/idxd_user.o 00:01:38.991 CC lib/rdma/common.o 00:01:38.991 CC lib/idxd/idxd.o 00:01:38.991 CC lib/rdma/rdma_verbs.o 00:01:38.991 CC lib/json/json_parse.o 00:01:38.991 CC lib/json/json_util.o 00:01:38.991 CC lib/json/json_write.o 00:01:38.991 CC lib/vmd/led.o 00:01:38.991 CC lib/vmd/vmd.o 00:01:38.991 CC lib/env_dpdk/env.o 00:01:38.991 CC lib/env_dpdk/pci.o 00:01:38.991 CC lib/env_dpdk/memory.o 00:01:38.991 CC lib/env_dpdk/threads.o 00:01:38.991 CC lib/env_dpdk/init.o 00:01:38.991 CC lib/env_dpdk/pci_virtio.o 00:01:38.991 CC lib/env_dpdk/pci_ioat.o 00:01:38.991 CC lib/env_dpdk/pci_vmd.o 00:01:38.991 CC lib/env_dpdk/pci_event.o 00:01:38.991 CC lib/env_dpdk/pci_idxd.o 00:01:38.991 CC lib/env_dpdk/sigbus_handler.o 00:01:38.991 CC lib/env_dpdk/pci_dpdk.o 00:01:38.991 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:38.991 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:39.249 LIB libspdk_conf.a 00:01:39.249 LIB libspdk_rdma.a 00:01:39.249 SO libspdk_conf.so.6.0 00:01:39.249 SO libspdk_rdma.so.6.0 00:01:39.249 LIB libspdk_json.a 00:01:39.507 SYMLINK libspdk_conf.so 00:01:39.507 SO libspdk_json.so.6.0 00:01:39.507 SYMLINK libspdk_rdma.so 00:01:39.507 SYMLINK libspdk_json.so 00:01:39.507 LIB libspdk_idxd.a 00:01:39.507 SO libspdk_idxd.so.12.0 00:01:39.507 LIB libspdk_vmd.a 00:01:39.507 SYMLINK libspdk_idxd.so 00:01:39.765 SO libspdk_vmd.so.6.0 00:01:39.765 SYMLINK libspdk_vmd.so 00:01:39.765 CC lib/jsonrpc/jsonrpc_server.o 00:01:39.765 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:39.765 CC lib/jsonrpc/jsonrpc_client.o 00:01:39.765 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:40.023 LIB libspdk_jsonrpc.a 00:01:40.023 SO libspdk_jsonrpc.so.6.0 00:01:40.023 LIB libspdk_env_dpdk.a 00:01:40.281 SYMLINK libspdk_jsonrpc.so 00:01:40.281 SO libspdk_env_dpdk.so.14.0 00:01:40.281 SYMLINK libspdk_env_dpdk.so 00:01:40.539 CC lib/rpc/rpc.o 00:01:40.797 LIB libspdk_rpc.a 00:01:40.797 SO libspdk_rpc.so.6.0 00:01:40.797 SYMLINK libspdk_rpc.so 00:01:41.055 CC lib/trace/trace.o 00:01:41.055 CC lib/trace/trace_flags.o 00:01:41.055 CC lib/trace/trace_rpc.o 00:01:41.055 CC lib/keyring/keyring.o 00:01:41.055 CC lib/keyring/keyring_rpc.o 00:01:41.055 CC lib/notify/notify.o 00:01:41.055 CC lib/notify/notify_rpc.o 00:01:41.312 LIB libspdk_notify.a 00:01:41.312 LIB libspdk_trace.a 00:01:41.312 LIB libspdk_keyring.a 00:01:41.312 SO 
libspdk_trace.so.10.0 00:01:41.312 SO libspdk_notify.so.6.0 00:01:41.313 SO libspdk_keyring.so.1.0 00:01:41.313 SYMLINK libspdk_notify.so 00:01:41.570 SYMLINK libspdk_trace.so 00:01:41.570 SYMLINK libspdk_keyring.so 00:01:41.827 CC lib/thread/thread.o 00:01:41.827 CC lib/thread/iobuf.o 00:01:41.827 CC lib/sock/sock.o 00:01:41.827 CC lib/sock/sock_rpc.o 00:01:42.084 LIB libspdk_sock.a 00:01:42.084 SO libspdk_sock.so.9.0 00:01:42.341 SYMLINK libspdk_sock.so 00:01:42.599 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:42.599 CC lib/nvme/nvme_ctrlr.o 00:01:42.599 CC lib/nvme/nvme_fabric.o 00:01:42.599 CC lib/nvme/nvme_ns_cmd.o 00:01:42.599 CC lib/nvme/nvme_ns.o 00:01:42.599 CC lib/nvme/nvme_qpair.o 00:01:42.599 CC lib/nvme/nvme_pcie_common.o 00:01:42.599 CC lib/nvme/nvme_pcie.o 00:01:42.599 CC lib/nvme/nvme.o 00:01:42.599 CC lib/nvme/nvme_transport.o 00:01:42.599 CC lib/nvme/nvme_quirks.o 00:01:42.599 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:42.599 CC lib/nvme/nvme_discovery.o 00:01:42.599 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:42.599 CC lib/nvme/nvme_tcp.o 00:01:42.599 CC lib/nvme/nvme_opal.o 00:01:42.599 CC lib/nvme/nvme_io_msg.o 00:01:42.599 CC lib/nvme/nvme_poll_group.o 00:01:42.599 CC lib/nvme/nvme_zns.o 00:01:42.599 CC lib/nvme/nvme_stubs.o 00:01:42.599 CC lib/nvme/nvme_auth.o 00:01:42.599 CC lib/nvme/nvme_cuse.o 00:01:42.599 CC lib/nvme/nvme_vfio_user.o 00:01:42.599 CC lib/nvme/nvme_rdma.o 00:01:42.857 LIB libspdk_thread.a 00:01:42.858 SO libspdk_thread.so.10.0 00:01:42.858 SYMLINK libspdk_thread.so 00:01:43.115 CC lib/init/subsystem_rpc.o 00:01:43.115 CC lib/init/json_config.o 00:01:43.115 CC lib/init/rpc.o 00:01:43.115 CC lib/init/subsystem.o 00:01:43.404 CC lib/vfu_tgt/tgt_endpoint.o 00:01:43.404 CC lib/vfu_tgt/tgt_rpc.o 00:01:43.404 CC lib/accel/accel.o 00:01:43.404 CC lib/accel/accel_rpc.o 00:01:43.404 CC lib/accel/accel_sw.o 00:01:43.404 CC lib/virtio/virtio.o 00:01:43.404 CC lib/virtio/virtio_pci.o 00:01:43.404 CC lib/virtio/virtio_vhost_user.o 00:01:43.404 CC lib/virtio/virtio_vfio_user.o 00:01:43.404 CC lib/blob/blobstore.o 00:01:43.404 CC lib/blob/zeroes.o 00:01:43.404 CC lib/blob/request.o 00:01:43.404 CC lib/blob/blob_bs_dev.o 00:01:43.404 LIB libspdk_init.a 00:01:43.404 SO libspdk_init.so.5.0 00:01:43.404 LIB libspdk_vfu_tgt.a 00:01:43.661 LIB libspdk_virtio.a 00:01:43.661 SYMLINK libspdk_init.so 00:01:43.661 SO libspdk_vfu_tgt.so.3.0 00:01:43.661 SO libspdk_virtio.so.7.0 00:01:43.661 SYMLINK libspdk_vfu_tgt.so 00:01:43.661 SYMLINK libspdk_virtio.so 00:01:43.918 CC lib/event/app.o 00:01:43.918 CC lib/event/log_rpc.o 00:01:43.918 CC lib/event/reactor.o 00:01:43.918 CC lib/event/app_rpc.o 00:01:43.918 CC lib/event/scheduler_static.o 00:01:43.918 LIB libspdk_accel.a 00:01:43.918 SO libspdk_accel.so.15.0 00:01:44.175 LIB libspdk_nvme.a 00:01:44.175 SYMLINK libspdk_accel.so 00:01:44.175 LIB libspdk_event.a 00:01:44.175 SO libspdk_nvme.so.13.0 00:01:44.175 SO libspdk_event.so.13.0 00:01:44.432 SYMLINK libspdk_event.so 00:01:44.432 CC lib/bdev/bdev.o 00:01:44.432 CC lib/bdev/bdev_rpc.o 00:01:44.432 CC lib/bdev/bdev_zone.o 00:01:44.432 CC lib/bdev/part.o 00:01:44.432 CC lib/bdev/scsi_nvme.o 00:01:44.432 SYMLINK libspdk_nvme.so 00:01:45.371 LIB libspdk_blob.a 00:01:45.371 SO libspdk_blob.so.11.0 00:01:45.371 SYMLINK libspdk_blob.so 00:01:45.936 CC lib/blobfs/blobfs.o 00:01:45.936 CC lib/blobfs/tree.o 00:01:45.936 CC lib/lvol/lvol.o 00:01:46.193 LIB libspdk_bdev.a 00:01:46.193 SO libspdk_bdev.so.15.0 00:01:46.451 SYMLINK libspdk_bdev.so 00:01:46.451 LIB libspdk_blobfs.a 00:01:46.451 SO 
libspdk_blobfs.so.10.0 00:01:46.451 LIB libspdk_lvol.a 00:01:46.451 SYMLINK libspdk_blobfs.so 00:01:46.451 SO libspdk_lvol.so.10.0 00:01:46.711 SYMLINK libspdk_lvol.so 00:01:46.712 CC lib/scsi/dev.o 00:01:46.712 CC lib/scsi/lun.o 00:01:46.712 CC lib/scsi/port.o 00:01:46.712 CC lib/scsi/scsi.o 00:01:46.712 CC lib/ftl/ftl_init.o 00:01:46.712 CC lib/scsi/scsi_bdev.o 00:01:46.712 CC lib/scsi/scsi_pr.o 00:01:46.712 CC lib/scsi/scsi_rpc.o 00:01:46.712 CC lib/ftl/ftl_core.o 00:01:46.712 CC lib/ftl/ftl_debug.o 00:01:46.712 CC lib/ftl/ftl_layout.o 00:01:46.712 CC lib/ftl/ftl_io.o 00:01:46.712 CC lib/scsi/task.o 00:01:46.712 CC lib/ftl/ftl_sb.o 00:01:46.712 CC lib/ftl/ftl_l2p.o 00:01:46.712 CC lib/ftl/ftl_l2p_flat.o 00:01:46.712 CC lib/nvmf/ctrlr_discovery.o 00:01:46.712 CC lib/ftl/ftl_nv_cache.o 00:01:46.712 CC lib/nvmf/ctrlr.o 00:01:46.712 CC lib/ftl/ftl_band.o 00:01:46.712 CC lib/ftl/ftl_band_ops.o 00:01:46.712 CC lib/nvmf/subsystem.o 00:01:46.712 CC lib/nvmf/ctrlr_bdev.o 00:01:46.712 CC lib/ublk/ublk.o 00:01:46.712 CC lib/ftl/ftl_writer.o 00:01:46.712 CC lib/ftl/ftl_rq.o 00:01:46.712 CC lib/nvmf/nvmf.o 00:01:46.712 CC lib/ftl/ftl_reloc.o 00:01:46.712 CC lib/ublk/ublk_rpc.o 00:01:46.712 CC lib/nvmf/transport.o 00:01:46.712 CC lib/nvmf/nvmf_rpc.o 00:01:46.712 CC lib/ftl/ftl_l2p_cache.o 00:01:46.712 CC lib/nvmf/tcp.o 00:01:46.712 CC lib/ftl/ftl_p2l.o 00:01:46.712 CC lib/nvmf/stubs.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:46.712 CC lib/nvmf/mdns_server.o 00:01:46.712 CC lib/nbd/nbd.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:46.712 CC lib/nvmf/vfio_user.o 00:01:46.712 CC lib/nbd/nbd_rpc.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:46.712 CC lib/nvmf/rdma.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:46.712 CC lib/nvmf/auth.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:46.712 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:46.712 CC lib/ftl/utils/ftl_conf.o 00:01:46.712 CC lib/ftl/utils/ftl_md.o 00:01:46.712 CC lib/ftl/utils/ftl_mempool.o 00:01:46.712 CC lib/ftl/utils/ftl_bitmap.o 00:01:46.712 CC lib/ftl/utils/ftl_property.o 00:01:46.712 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:46.712 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:46.712 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:46.712 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:46.712 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:46.712 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:46.712 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:46.712 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:46.712 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:46.712 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:46.712 CC lib/ftl/base/ftl_base_dev.o 00:01:46.712 CC lib/ftl/base/ftl_base_bdev.o 00:01:46.712 CC lib/ftl/ftl_trace.o 00:01:47.279 LIB libspdk_nbd.a 00:01:47.279 SO libspdk_nbd.so.7.0 00:01:47.279 SYMLINK libspdk_nbd.so 00:01:47.279 LIB libspdk_ublk.a 00:01:47.279 LIB libspdk_scsi.a 00:01:47.279 SO libspdk_ublk.so.3.0 00:01:47.538 SO libspdk_scsi.so.9.0 00:01:47.538 SYMLINK libspdk_ublk.so 00:01:47.538 SYMLINK libspdk_scsi.so 00:01:47.795 LIB libspdk_ftl.a 00:01:47.795 SO libspdk_ftl.so.9.0 00:01:47.795 CC lib/vhost/vhost.o 00:01:47.795 CC lib/vhost/vhost_rpc.o 00:01:47.795 CC lib/vhost/vhost_blk.o 00:01:47.795 CC lib/vhost/vhost_scsi.o 
00:01:47.795 CC lib/vhost/rte_vhost_user.o 00:01:47.795 CC lib/iscsi/conn.o 00:01:47.795 CC lib/iscsi/md5.o 00:01:47.795 CC lib/iscsi/init_grp.o 00:01:47.795 CC lib/iscsi/iscsi.o 00:01:47.795 CC lib/iscsi/iscsi_subsystem.o 00:01:47.795 CC lib/iscsi/param.o 00:01:47.795 CC lib/iscsi/portal_grp.o 00:01:47.795 CC lib/iscsi/tgt_node.o 00:01:47.795 CC lib/iscsi/iscsi_rpc.o 00:01:47.795 CC lib/iscsi/task.o 00:01:48.052 SYMLINK libspdk_ftl.so 00:01:48.310 LIB libspdk_nvmf.a 00:01:48.310 SO libspdk_nvmf.so.18.0 00:01:48.567 SYMLINK libspdk_nvmf.so 00:01:48.567 LIB libspdk_vhost.a 00:01:48.567 SO libspdk_vhost.so.8.0 00:01:48.824 SYMLINK libspdk_vhost.so 00:01:48.824 LIB libspdk_iscsi.a 00:01:48.824 SO libspdk_iscsi.so.8.0 00:01:49.081 SYMLINK libspdk_iscsi.so 00:01:49.645 CC module/vfu_device/vfu_virtio.o 00:01:49.645 CC module/vfu_device/vfu_virtio_blk.o 00:01:49.645 CC module/vfu_device/vfu_virtio_scsi.o 00:01:49.645 CC module/vfu_device/vfu_virtio_rpc.o 00:01:49.645 CC module/env_dpdk/env_dpdk_rpc.o 00:01:49.645 CC module/keyring/file/keyring.o 00:01:49.645 CC module/keyring/file/keyring_rpc.o 00:01:49.645 CC module/accel/dsa/accel_dsa.o 00:01:49.645 CC module/blob/bdev/blob_bdev.o 00:01:49.645 CC module/accel/dsa/accel_dsa_rpc.o 00:01:49.645 CC module/accel/error/accel_error.o 00:01:49.645 CC module/accel/error/accel_error_rpc.o 00:01:49.645 CC module/accel/iaa/accel_iaa.o 00:01:49.645 CC module/accel/iaa/accel_iaa_rpc.o 00:01:49.645 CC module/sock/posix/posix.o 00:01:49.645 LIB libspdk_env_dpdk_rpc.a 00:01:49.645 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:49.645 CC module/accel/ioat/accel_ioat.o 00:01:49.645 CC module/accel/ioat/accel_ioat_rpc.o 00:01:49.645 CC module/scheduler/gscheduler/gscheduler.o 00:01:49.645 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:49.645 SO libspdk_env_dpdk_rpc.so.6.0 00:01:49.902 SYMLINK libspdk_env_dpdk_rpc.so 00:01:49.902 LIB libspdk_keyring_file.a 00:01:49.902 SO libspdk_keyring_file.so.1.0 00:01:49.902 LIB libspdk_scheduler_gscheduler.a 00:01:49.902 LIB libspdk_scheduler_dpdk_governor.a 00:01:49.902 SO libspdk_scheduler_gscheduler.so.4.0 00:01:49.902 LIB libspdk_accel_error.a 00:01:49.902 LIB libspdk_accel_iaa.a 00:01:49.902 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:49.902 LIB libspdk_accel_ioat.a 00:01:49.902 LIB libspdk_scheduler_dynamic.a 00:01:49.902 SYMLINK libspdk_keyring_file.so 00:01:49.902 SO libspdk_accel_error.so.2.0 00:01:49.902 LIB libspdk_accel_dsa.a 00:01:49.902 SYMLINK libspdk_scheduler_gscheduler.so 00:01:49.902 LIB libspdk_blob_bdev.a 00:01:49.902 SO libspdk_scheduler_dynamic.so.4.0 00:01:49.902 SO libspdk_accel_iaa.so.3.0 00:01:49.902 SO libspdk_accel_dsa.so.5.0 00:01:49.902 SO libspdk_accel_ioat.so.6.0 00:01:49.902 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:49.902 SYMLINK libspdk_accel_error.so 00:01:49.902 SYMLINK libspdk_scheduler_dynamic.so 00:01:49.902 SO libspdk_blob_bdev.so.11.0 00:01:49.902 SYMLINK libspdk_accel_dsa.so 00:01:49.902 SYMLINK libspdk_accel_ioat.so 00:01:49.902 SYMLINK libspdk_accel_iaa.so 00:01:50.159 LIB libspdk_vfu_device.a 00:01:50.159 SYMLINK libspdk_blob_bdev.so 00:01:50.159 SO libspdk_vfu_device.so.3.0 00:01:50.159 SYMLINK libspdk_vfu_device.so 00:01:50.159 LIB libspdk_sock_posix.a 00:01:50.417 SO libspdk_sock_posix.so.6.0 00:01:50.417 SYMLINK libspdk_sock_posix.so 00:01:50.417 CC module/bdev/gpt/gpt.o 00:01:50.417 CC module/bdev/gpt/vbdev_gpt.o 00:01:50.675 CC module/bdev/malloc/bdev_malloc.o 00:01:50.675 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:50.675 CC 
module/bdev/aio/bdev_aio.o 00:01:50.675 CC module/blobfs/bdev/blobfs_bdev.o 00:01:50.675 CC module/bdev/raid/bdev_raid.o 00:01:50.675 CC module/bdev/aio/bdev_aio_rpc.o 00:01:50.675 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:50.675 CC module/bdev/raid/bdev_raid_rpc.o 00:01:50.675 CC module/bdev/raid/bdev_raid_sb.o 00:01:50.675 CC module/bdev/raid/raid1.o 00:01:50.675 CC module/bdev/raid/raid0.o 00:01:50.675 CC module/bdev/raid/concat.o 00:01:50.675 CC module/bdev/null/bdev_null.o 00:01:50.675 CC module/bdev/split/vbdev_split.o 00:01:50.675 CC module/bdev/null/bdev_null_rpc.o 00:01:50.675 CC module/bdev/split/vbdev_split_rpc.o 00:01:50.675 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:50.675 CC module/bdev/nvme/bdev_nvme.o 00:01:50.675 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:50.675 CC module/bdev/error/vbdev_error.o 00:01:50.675 CC module/bdev/nvme/bdev_mdns_client.o 00:01:50.675 CC module/bdev/nvme/nvme_rpc.o 00:01:50.675 CC module/bdev/error/vbdev_error_rpc.o 00:01:50.675 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:50.675 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:50.675 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:50.675 CC module/bdev/nvme/vbdev_opal.o 00:01:50.675 CC module/bdev/ftl/bdev_ftl.o 00:01:50.675 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:50.675 CC module/bdev/delay/vbdev_delay.o 00:01:50.675 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:50.675 CC module/bdev/lvol/vbdev_lvol.o 00:01:50.675 CC module/bdev/passthru/vbdev_passthru.o 00:01:50.675 CC module/bdev/iscsi/bdev_iscsi.o 00:01:50.675 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:50.675 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:50.675 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:50.675 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:50.675 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:50.675 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:50.675 LIB libspdk_blobfs_bdev.a 00:01:50.932 SO libspdk_blobfs_bdev.so.6.0 00:01:50.932 LIB libspdk_bdev_split.a 00:01:50.932 LIB libspdk_bdev_gpt.a 00:01:50.932 SO libspdk_bdev_split.so.6.0 00:01:50.932 LIB libspdk_bdev_null.a 00:01:50.932 SO libspdk_bdev_gpt.so.6.0 00:01:50.932 SYMLINK libspdk_blobfs_bdev.so 00:01:50.932 LIB libspdk_bdev_error.a 00:01:50.932 LIB libspdk_bdev_ftl.a 00:01:50.932 LIB libspdk_bdev_passthru.a 00:01:50.932 LIB libspdk_bdev_aio.a 00:01:50.932 SO libspdk_bdev_null.so.6.0 00:01:50.932 LIB libspdk_bdev_zone_block.a 00:01:50.932 LIB libspdk_bdev_malloc.a 00:01:50.932 SO libspdk_bdev_error.so.6.0 00:01:50.932 SO libspdk_bdev_ftl.so.6.0 00:01:50.932 SYMLINK libspdk_bdev_split.so 00:01:50.932 SO libspdk_bdev_passthru.so.6.0 00:01:50.932 SO libspdk_bdev_aio.so.6.0 00:01:50.932 LIB libspdk_bdev_iscsi.a 00:01:50.932 LIB libspdk_bdev_delay.a 00:01:50.932 SYMLINK libspdk_bdev_gpt.so 00:01:50.932 SYMLINK libspdk_bdev_null.so 00:01:50.932 SO libspdk_bdev_malloc.so.6.0 00:01:50.932 SO libspdk_bdev_zone_block.so.6.0 00:01:50.932 SO libspdk_bdev_iscsi.so.6.0 00:01:50.932 SO libspdk_bdev_delay.so.6.0 00:01:50.932 SYMLINK libspdk_bdev_passthru.so 00:01:50.932 SYMLINK libspdk_bdev_error.so 00:01:50.932 SYMLINK libspdk_bdev_ftl.so 00:01:50.932 SYMLINK libspdk_bdev_aio.so 00:01:50.932 SYMLINK libspdk_bdev_zone_block.so 00:01:50.932 SYMLINK libspdk_bdev_malloc.so 00:01:51.190 LIB libspdk_bdev_virtio.a 00:01:51.190 LIB libspdk_bdev_lvol.a 00:01:51.190 SYMLINK libspdk_bdev_iscsi.so 00:01:51.190 SYMLINK libspdk_bdev_delay.so 00:01:51.190 SO libspdk_bdev_lvol.so.6.0 00:01:51.190 SO libspdk_bdev_virtio.so.6.0 00:01:51.190 SYMLINK libspdk_bdev_lvol.so 00:01:51.190 
SYMLINK libspdk_bdev_virtio.so 00:01:51.448 LIB libspdk_bdev_raid.a 00:01:51.448 SO libspdk_bdev_raid.so.6.0 00:01:51.448 SYMLINK libspdk_bdev_raid.so 00:01:52.378 LIB libspdk_bdev_nvme.a 00:01:52.378 SO libspdk_bdev_nvme.so.7.0 00:01:52.378 SYMLINK libspdk_bdev_nvme.so 00:01:53.314 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:53.314 CC module/event/subsystems/keyring/keyring.o 00:01:53.314 CC module/event/subsystems/vmd/vmd.o 00:01:53.314 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:53.314 CC module/event/subsystems/scheduler/scheduler.o 00:01:53.314 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:53.314 CC module/event/subsystems/sock/sock.o 00:01:53.314 CC module/event/subsystems/iobuf/iobuf.o 00:01:53.314 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:53.314 LIB libspdk_event_scheduler.a 00:01:53.314 LIB libspdk_event_keyring.a 00:01:53.314 LIB libspdk_event_vhost_blk.a 00:01:53.314 LIB libspdk_event_vmd.a 00:01:53.314 LIB libspdk_event_vfu_tgt.a 00:01:53.314 SO libspdk_event_scheduler.so.4.0 00:01:53.314 LIB libspdk_event_sock.a 00:01:53.314 LIB libspdk_event_iobuf.a 00:01:53.314 SO libspdk_event_keyring.so.1.0 00:01:53.314 SO libspdk_event_vfu_tgt.so.3.0 00:01:53.314 SO libspdk_event_vhost_blk.so.3.0 00:01:53.314 SO libspdk_event_vmd.so.6.0 00:01:53.314 SO libspdk_event_sock.so.5.0 00:01:53.314 SYMLINK libspdk_event_scheduler.so 00:01:53.314 SO libspdk_event_iobuf.so.3.0 00:01:53.314 SYMLINK libspdk_event_keyring.so 00:01:53.314 SYMLINK libspdk_event_vfu_tgt.so 00:01:53.314 SYMLINK libspdk_event_vhost_blk.so 00:01:53.314 SYMLINK libspdk_event_vmd.so 00:01:53.314 SYMLINK libspdk_event_sock.so 00:01:53.314 SYMLINK libspdk_event_iobuf.so 00:01:53.881 CC module/event/subsystems/accel/accel.o 00:01:53.881 LIB libspdk_event_accel.a 00:01:53.881 SO libspdk_event_accel.so.6.0 00:01:54.145 SYMLINK libspdk_event_accel.so 00:01:54.403 CC module/event/subsystems/bdev/bdev.o 00:01:54.661 LIB libspdk_event_bdev.a 00:01:54.661 SO libspdk_event_bdev.so.6.0 00:01:54.661 SYMLINK libspdk_event_bdev.so 00:01:54.918 CC module/event/subsystems/nbd/nbd.o 00:01:54.918 CC module/event/subsystems/ublk/ublk.o 00:01:54.918 CC module/event/subsystems/scsi/scsi.o 00:01:54.918 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:54.918 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:55.175 LIB libspdk_event_nbd.a 00:01:55.175 SO libspdk_event_nbd.so.6.0 00:01:55.175 LIB libspdk_event_ublk.a 00:01:55.175 LIB libspdk_event_scsi.a 00:01:55.175 SO libspdk_event_ublk.so.3.0 00:01:55.175 SYMLINK libspdk_event_nbd.so 00:01:55.176 SO libspdk_event_scsi.so.6.0 00:01:55.176 LIB libspdk_event_nvmf.a 00:01:55.176 SYMLINK libspdk_event_ublk.so 00:01:55.176 SYMLINK libspdk_event_scsi.so 00:01:55.176 SO libspdk_event_nvmf.so.6.0 00:01:55.433 SYMLINK libspdk_event_nvmf.so 00:01:55.691 CC module/event/subsystems/iscsi/iscsi.o 00:01:55.691 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:55.691 LIB libspdk_event_vhost_scsi.a 00:01:55.691 LIB libspdk_event_iscsi.a 00:01:55.949 SO libspdk_event_vhost_scsi.so.3.0 00:01:55.949 SO libspdk_event_iscsi.so.6.0 00:01:55.949 SYMLINK libspdk_event_vhost_scsi.so 00:01:55.949 SYMLINK libspdk_event_iscsi.so 00:01:56.206 SO libspdk.so.6.0 00:01:56.206 SYMLINK libspdk.so 00:01:56.465 CC app/spdk_nvme_perf/perf.o 00:01:56.465 CC app/spdk_nvme_identify/identify.o 00:01:56.465 CC app/spdk_nvme_discover/discovery_aer.o 00:01:56.465 CC app/spdk_lspci/spdk_lspci.o 00:01:56.465 CC app/trace_record/trace_record.o 00:01:56.465 CC test/rpc_client/rpc_client_test.o 00:01:56.465 CXX 
app/trace/trace.o 00:01:56.465 CC app/spdk_top/spdk_top.o 00:01:56.465 TEST_HEADER include/spdk/accel.h 00:01:56.465 TEST_HEADER include/spdk/accel_module.h 00:01:56.465 TEST_HEADER include/spdk/assert.h 00:01:56.465 TEST_HEADER include/spdk/base64.h 00:01:56.465 TEST_HEADER include/spdk/bdev.h 00:01:56.465 TEST_HEADER include/spdk/barrier.h 00:01:56.465 TEST_HEADER include/spdk/bit_array.h 00:01:56.465 TEST_HEADER include/spdk/bdev_zone.h 00:01:56.465 TEST_HEADER include/spdk/bdev_module.h 00:01:56.465 TEST_HEADER include/spdk/bit_pool.h 00:01:56.465 TEST_HEADER include/spdk/blob_bdev.h 00:01:56.465 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:56.465 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:56.465 TEST_HEADER include/spdk/blobfs.h 00:01:56.465 TEST_HEADER include/spdk/conf.h 00:01:56.465 TEST_HEADER include/spdk/blob.h 00:01:56.465 TEST_HEADER include/spdk/config.h 00:01:56.465 TEST_HEADER include/spdk/cpuset.h 00:01:56.465 TEST_HEADER include/spdk/crc32.h 00:01:56.465 TEST_HEADER include/spdk/crc16.h 00:01:56.465 TEST_HEADER include/spdk/crc64.h 00:01:56.465 TEST_HEADER include/spdk/dif.h 00:01:56.465 TEST_HEADER include/spdk/dma.h 00:01:56.465 TEST_HEADER include/spdk/endian.h 00:01:56.465 TEST_HEADER include/spdk/env_dpdk.h 00:01:56.465 TEST_HEADER include/spdk/env.h 00:01:56.465 TEST_HEADER include/spdk/event.h 00:01:56.465 TEST_HEADER include/spdk/fd_group.h 00:01:56.465 TEST_HEADER include/spdk/fd.h 00:01:56.465 TEST_HEADER include/spdk/file.h 00:01:56.465 TEST_HEADER include/spdk/ftl.h 00:01:56.465 TEST_HEADER include/spdk/gpt_spec.h 00:01:56.465 TEST_HEADER include/spdk/hexlify.h 00:01:56.465 TEST_HEADER include/spdk/idxd.h 00:01:56.465 TEST_HEADER include/spdk/histogram_data.h 00:01:56.465 TEST_HEADER include/spdk/idxd_spec.h 00:01:56.465 TEST_HEADER include/spdk/ioat.h 00:01:56.465 TEST_HEADER include/spdk/init.h 00:01:56.465 TEST_HEADER include/spdk/ioat_spec.h 00:01:56.465 TEST_HEADER include/spdk/iscsi_spec.h 00:01:56.465 TEST_HEADER include/spdk/json.h 00:01:56.465 TEST_HEADER include/spdk/jsonrpc.h 00:01:56.465 TEST_HEADER include/spdk/keyring.h 00:01:56.465 TEST_HEADER include/spdk/keyring_module.h 00:01:56.465 CC app/spdk_dd/spdk_dd.o 00:01:56.465 TEST_HEADER include/spdk/likely.h 00:01:56.465 TEST_HEADER include/spdk/log.h 00:01:56.465 TEST_HEADER include/spdk/lvol.h 00:01:56.465 TEST_HEADER include/spdk/memory.h 00:01:56.465 TEST_HEADER include/spdk/mmio.h 00:01:56.465 TEST_HEADER include/spdk/nbd.h 00:01:56.465 TEST_HEADER include/spdk/nvme.h 00:01:56.465 TEST_HEADER include/spdk/notify.h 00:01:56.465 TEST_HEADER include/spdk/nvme_intel.h 00:01:56.465 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:56.465 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:56.465 TEST_HEADER include/spdk/nvme_spec.h 00:01:56.465 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:56.465 TEST_HEADER include/spdk/nvme_zns.h 00:01:56.465 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:56.465 TEST_HEADER include/spdk/nvmf_spec.h 00:01:56.465 TEST_HEADER include/spdk/nvmf.h 00:01:56.465 TEST_HEADER include/spdk/nvmf_transport.h 00:01:56.465 TEST_HEADER include/spdk/opal.h 00:01:56.465 CC app/nvmf_tgt/nvmf_main.o 00:01:56.465 TEST_HEADER include/spdk/opal_spec.h 00:01:56.465 TEST_HEADER include/spdk/pci_ids.h 00:01:56.465 TEST_HEADER include/spdk/pipe.h 00:01:56.465 TEST_HEADER include/spdk/queue.h 00:01:56.465 CC app/iscsi_tgt/iscsi_tgt.o 00:01:56.465 TEST_HEADER include/spdk/reduce.h 00:01:56.465 CC app/spdk_tgt/spdk_tgt.o 00:01:56.465 TEST_HEADER include/spdk/rpc.h 00:01:56.465 TEST_HEADER 
include/spdk/scheduler.h 00:01:56.465 TEST_HEADER include/spdk/scsi.h 00:01:56.465 TEST_HEADER include/spdk/scsi_spec.h 00:01:56.465 TEST_HEADER include/spdk/stdinc.h 00:01:56.465 TEST_HEADER include/spdk/sock.h 00:01:56.465 TEST_HEADER include/spdk/string.h 00:01:56.465 TEST_HEADER include/spdk/thread.h 00:01:56.465 TEST_HEADER include/spdk/trace.h 00:01:56.465 TEST_HEADER include/spdk/trace_parser.h 00:01:56.465 TEST_HEADER include/spdk/tree.h 00:01:56.465 TEST_HEADER include/spdk/ublk.h 00:01:56.465 TEST_HEADER include/spdk/util.h 00:01:56.465 TEST_HEADER include/spdk/uuid.h 00:01:56.465 CC app/vhost/vhost.o 00:01:56.465 TEST_HEADER include/spdk/version.h 00:01:56.465 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:56.465 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:56.465 TEST_HEADER include/spdk/vhost.h 00:01:56.465 TEST_HEADER include/spdk/vmd.h 00:01:56.465 TEST_HEADER include/spdk/xor.h 00:01:56.465 TEST_HEADER include/spdk/zipf.h 00:01:56.465 CXX test/cpp_headers/accel.o 00:01:56.465 CXX test/cpp_headers/accel_module.o 00:01:56.465 CXX test/cpp_headers/assert.o 00:01:56.465 CXX test/cpp_headers/barrier.o 00:01:56.465 CXX test/cpp_headers/base64.o 00:01:56.465 CXX test/cpp_headers/bdev.o 00:01:56.465 CXX test/cpp_headers/bdev_zone.o 00:01:56.465 CXX test/cpp_headers/bdev_module.o 00:01:56.465 CXX test/cpp_headers/bit_array.o 00:01:56.465 CXX test/cpp_headers/bit_pool.o 00:01:56.465 CXX test/cpp_headers/blob_bdev.o 00:01:56.465 CXX test/cpp_headers/blobfs_bdev.o 00:01:56.465 CXX test/cpp_headers/blobfs.o 00:01:56.760 CXX test/cpp_headers/blob.o 00:01:56.760 CXX test/cpp_headers/conf.o 00:01:56.760 CXX test/cpp_headers/config.o 00:01:56.760 CXX test/cpp_headers/cpuset.o 00:01:56.760 CXX test/cpp_headers/crc16.o 00:01:56.760 CXX test/cpp_headers/crc32.o 00:01:56.760 CXX test/cpp_headers/crc64.o 00:01:56.760 CXX test/cpp_headers/dif.o 00:01:56.760 CXX test/cpp_headers/dma.o 00:01:56.760 CXX test/cpp_headers/endian.o 00:01:56.760 CXX test/cpp_headers/env_dpdk.o 00:01:56.760 CXX test/cpp_headers/env.o 00:01:56.760 CXX test/cpp_headers/event.o 00:01:56.760 CXX test/cpp_headers/fd_group.o 00:01:56.760 CXX test/cpp_headers/fd.o 00:01:56.760 CXX test/cpp_headers/file.o 00:01:56.760 CXX test/cpp_headers/ftl.o 00:01:56.760 CXX test/cpp_headers/gpt_spec.o 00:01:56.760 CXX test/cpp_headers/hexlify.o 00:01:56.760 CXX test/cpp_headers/histogram_data.o 00:01:56.760 CXX test/cpp_headers/idxd_spec.o 00:01:56.760 CXX test/cpp_headers/idxd.o 00:01:56.760 CXX test/cpp_headers/init.o 00:01:56.760 CXX test/cpp_headers/ioat.o 00:01:56.760 CC examples/ioat/verify/verify.o 00:01:56.760 CC examples/vmd/led/led.o 00:01:56.760 CC examples/nvme/arbitration/arbitration.o 00:01:56.760 CC examples/ioat/perf/perf.o 00:01:56.760 CC examples/vmd/lsvmd/lsvmd.o 00:01:56.760 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:56.760 CC examples/nvme/hello_world/hello_world.o 00:01:56.760 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:56.760 CC examples/nvme/reconnect/reconnect.o 00:01:56.760 CXX test/cpp_headers/ioat_spec.o 00:01:56.760 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:56.760 CC examples/idxd/perf/perf.o 00:01:56.760 CC examples/nvme/hotplug/hotplug.o 00:01:56.760 CC test/env/memory/memory_ut.o 00:01:56.760 CC examples/nvme/abort/abort.o 00:01:56.760 CC examples/accel/perf/accel_perf.o 00:01:56.760 CC test/app/stub/stub.o 00:01:56.760 CC test/env/vtophys/vtophys.o 00:01:56.760 CC test/env/pci/pci_ut.o 00:01:56.760 CC test/app/histogram_perf/histogram_perf.o 00:01:56.760 CC 
examples/sock/hello_world/hello_sock.o 00:01:56.760 CC examples/util/zipf/zipf.o 00:01:56.760 CC test/app/jsoncat/jsoncat.o 00:01:56.760 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:56.760 CC app/fio/nvme/fio_plugin.o 00:01:56.760 CC test/event/reactor_perf/reactor_perf.o 00:01:56.760 CC test/event/reactor/reactor.o 00:01:56.760 CC test/nvme/e2edp/nvme_dp.o 00:01:56.760 CC test/nvme/err_injection/err_injection.o 00:01:56.760 CC test/nvme/aer/aer.o 00:01:56.760 CC test/thread/poller_perf/poller_perf.o 00:01:56.760 CC test/event/event_perf/event_perf.o 00:01:56.760 CC test/nvme/simple_copy/simple_copy.o 00:01:56.760 CC test/nvme/sgl/sgl.o 00:01:56.760 CC examples/blob/cli/blobcli.o 00:01:56.760 CC test/nvme/reserve/reserve.o 00:01:56.760 CC test/nvme/fdp/fdp.o 00:01:56.760 CC examples/bdev/hello_world/hello_bdev.o 00:01:56.760 CC test/blobfs/mkfs/mkfs.o 00:01:56.760 CC test/nvme/reset/reset.o 00:01:56.760 CC test/nvme/boot_partition/boot_partition.o 00:01:56.760 CC test/nvme/connect_stress/connect_stress.o 00:01:56.760 CC test/event/app_repeat/app_repeat.o 00:01:56.760 CC examples/thread/thread/thread_ex.o 00:01:56.760 CC examples/nvmf/nvmf/nvmf.o 00:01:56.760 CC test/nvme/compliance/nvme_compliance.o 00:01:56.760 CC test/nvme/fused_ordering/fused_ordering.o 00:01:56.760 CC test/nvme/overhead/overhead.o 00:01:56.760 CC test/nvme/startup/startup.o 00:01:56.760 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:56.760 CC test/accel/dif/dif.o 00:01:56.760 CC test/nvme/cuse/cuse.o 00:01:56.760 CC app/fio/bdev/fio_plugin.o 00:01:56.761 CC examples/bdev/bdevperf/bdevperf.o 00:01:56.761 CC test/event/scheduler/scheduler.o 00:01:56.761 CC test/app/bdev_svc/bdev_svc.o 00:01:56.761 CC examples/blob/hello_world/hello_blob.o 00:01:56.761 CC test/bdev/bdevio/bdevio.o 00:01:56.761 CC test/dma/test_dma/test_dma.o 00:01:57.037 LINK spdk_lspci 00:01:57.037 LINK rpc_client_test 00:01:57.037 LINK interrupt_tgt 00:01:57.037 LINK spdk_nvme_discover 00:01:57.037 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:57.037 CC test/lvol/esnap/esnap.o 00:01:57.305 CC test/env/mem_callbacks/mem_callbacks.o 00:01:57.305 LINK nvmf_tgt 00:01:57.305 LINK vhost 00:01:57.305 LINK lsvmd 00:01:57.305 LINK vtophys 00:01:57.305 LINK led 00:01:57.305 LINK reactor_perf 00:01:57.305 LINK env_dpdk_post_init 00:01:57.305 LINK histogram_perf 00:01:57.305 LINK jsoncat 00:01:57.305 CXX test/cpp_headers/iscsi_spec.o 00:01:57.305 LINK zipf 00:01:57.305 LINK reactor 00:01:57.305 CXX test/cpp_headers/json.o 00:01:57.305 CXX test/cpp_headers/jsonrpc.o 00:01:57.305 LINK iscsi_tgt 00:01:57.305 CXX test/cpp_headers/keyring.o 00:01:57.305 CXX test/cpp_headers/keyring_module.o 00:01:57.305 LINK poller_perf 00:01:57.305 CXX test/cpp_headers/likely.o 00:01:57.305 LINK spdk_trace_record 00:01:57.305 CXX test/cpp_headers/log.o 00:01:57.305 LINK spdk_tgt 00:01:57.305 CXX test/cpp_headers/lvol.o 00:01:57.305 LINK stub 00:01:57.305 CXX test/cpp_headers/memory.o 00:01:57.305 LINK ioat_perf 00:01:57.305 LINK app_repeat 00:01:57.305 CXX test/cpp_headers/mmio.o 00:01:57.305 LINK event_perf 00:01:57.305 LINK pmr_persistence 00:01:57.305 CXX test/cpp_headers/nbd.o 00:01:57.305 CXX test/cpp_headers/notify.o 00:01:57.305 CXX test/cpp_headers/nvme.o 00:01:57.305 CXX test/cpp_headers/nvme_intel.o 00:01:57.305 CXX test/cpp_headers/nvme_ocssd.o 00:01:57.305 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:57.305 LINK verify 00:01:57.305 CXX test/cpp_headers/nvme_spec.o 00:01:57.305 CXX test/cpp_headers/nvme_zns.o 00:01:57.305 LINK boot_partition 00:01:57.305 CXX 
test/cpp_headers/nvmf_cmd.o 00:01:57.305 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:57.305 CXX test/cpp_headers/nvmf.o 00:01:57.305 LINK cmb_copy 00:01:57.305 CXX test/cpp_headers/nvmf_spec.o 00:01:57.305 CXX test/cpp_headers/opal_spec.o 00:01:57.305 CXX test/cpp_headers/nvmf_transport.o 00:01:57.305 CXX test/cpp_headers/opal.o 00:01:57.305 CXX test/cpp_headers/pci_ids.o 00:01:57.305 LINK err_injection 00:01:57.305 CXX test/cpp_headers/pipe.o 00:01:57.305 LINK startup 00:01:57.305 LINK fused_ordering 00:01:57.305 LINK hello_world 00:01:57.305 CXX test/cpp_headers/queue.o 00:01:57.305 CXX test/cpp_headers/reduce.o 00:01:57.305 LINK doorbell_aers 00:01:57.305 LINK connect_stress 00:01:57.305 CXX test/cpp_headers/rpc.o 00:01:57.305 LINK bdev_svc 00:01:57.305 CXX test/cpp_headers/scheduler.o 00:01:57.305 LINK hello_sock 00:01:57.305 CXX test/cpp_headers/scsi.o 00:01:57.305 CXX test/cpp_headers/sock.o 00:01:57.305 CXX test/cpp_headers/scsi_spec.o 00:01:57.574 CXX test/cpp_headers/stdinc.o 00:01:57.574 LINK hotplug 00:01:57.574 CXX test/cpp_headers/string.o 00:01:57.574 LINK mkfs 00:01:57.574 LINK scheduler 00:01:57.574 LINK reserve 00:01:57.574 LINK simple_copy 00:01:57.574 CXX test/cpp_headers/trace.o 00:01:57.574 CXX test/cpp_headers/thread.o 00:01:57.574 CXX test/cpp_headers/trace_parser.o 00:01:57.574 LINK hello_bdev 00:01:57.574 LINK thread 00:01:57.574 LINK sgl 00:01:57.574 CXX test/cpp_headers/tree.o 00:01:57.574 LINK nvme_dp 00:01:57.574 LINK hello_blob 00:01:57.574 LINK aer 00:01:57.574 LINK reset 00:01:57.574 LINK arbitration 00:01:57.574 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:57.574 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:57.574 CXX test/cpp_headers/ublk.o 00:01:57.574 LINK reconnect 00:01:57.574 LINK idxd_perf 00:01:57.574 LINK nvme_compliance 00:01:57.574 LINK spdk_dd 00:01:57.574 LINK fdp 00:01:57.574 LINK nvmf 00:01:57.574 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:57.574 CXX test/cpp_headers/util.o 00:01:57.574 CXX test/cpp_headers/uuid.o 00:01:57.574 LINK pci_ut 00:01:57.574 CXX test/cpp_headers/version.o 00:01:57.574 CXX test/cpp_headers/vfio_user_pci.o 00:01:57.574 LINK overhead 00:01:57.574 CXX test/cpp_headers/vfio_user_spec.o 00:01:57.574 CXX test/cpp_headers/vhost.o 00:01:57.574 LINK abort 00:01:57.574 CXX test/cpp_headers/vmd.o 00:01:57.574 CXX test/cpp_headers/xor.o 00:01:57.834 CXX test/cpp_headers/zipf.o 00:01:57.834 LINK dif 00:01:57.834 LINK spdk_trace 00:01:57.834 LINK test_dma 00:01:57.834 LINK accel_perf 00:01:57.834 LINK bdevio 00:01:57.834 LINK nvme_manage 00:01:57.834 LINK blobcli 00:01:57.834 LINK spdk_nvme 00:01:57.834 LINK spdk_bdev 00:01:58.092 LINK nvme_fuzz 00:01:58.092 LINK mem_callbacks 00:01:58.092 LINK spdk_nvme_perf 00:01:58.092 LINK spdk_top 00:01:58.092 LINK spdk_nvme_identify 00:01:58.092 LINK memory_ut 00:01:58.350 LINK bdevperf 00:01:58.350 LINK vhost_fuzz 00:01:58.350 LINK cuse 00:01:58.916 LINK iscsi_fuzz 00:02:00.814 LINK esnap 00:02:01.072 00:02:01.072 real 0m48.375s 00:02:01.072 user 6m35.239s 00:02:01.072 sys 4m20.409s 00:02:01.072 15:39:59 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:01.072 15:39:59 make -- common/autotest_common.sh@10 -- $ set +x 00:02:01.072 ************************************ 00:02:01.072 END TEST make 00:02:01.072 ************************************ 00:02:01.072 15:39:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:01.072 15:39:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:01.072 15:39:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 
00:02:01.072 15:39:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.072 15:39:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:01.072 15:39:59 -- pm/common@44 -- $ pid=3449300 00:02:01.072 15:39:59 -- pm/common@50 -- $ kill -TERM 3449300 00:02:01.072 15:39:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.072 15:39:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:01.072 15:39:59 -- pm/common@44 -- $ pid=3449302 00:02:01.072 15:39:59 -- pm/common@50 -- $ kill -TERM 3449302 00:02:01.072 15:39:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.072 15:39:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:01.072 15:39:59 -- pm/common@44 -- $ pid=3449304 00:02:01.072 15:39:59 -- pm/common@50 -- $ kill -TERM 3449304 00:02:01.073 15:39:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.073 15:39:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:01.073 15:39:59 -- pm/common@44 -- $ pid=3449334 00:02:01.073 15:39:59 -- pm/common@50 -- $ sudo -E kill -TERM 3449334 00:02:01.073 15:39:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:01.073 15:39:59 -- nvmf/common.sh@7 -- # uname -s 00:02:01.073 15:39:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:01.073 15:39:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:01.073 15:39:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:01.073 15:39:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:01.073 15:39:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:01.073 15:39:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:01.073 15:39:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:01.073 15:39:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:01.073 15:39:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:01.073 15:39:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:01.073 15:39:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:02:01.073 15:39:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:02:01.073 15:39:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:01.073 15:39:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:01.073 15:39:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:01.073 15:39:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:01.073 15:39:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:01.073 15:39:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:01.073 15:39:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:01.073 15:39:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:01.073 15:39:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.073 15:39:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.073 15:39:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.073 15:39:59 -- paths/export.sh@5 -- # export PATH 00:02:01.073 15:39:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.073 15:39:59 -- nvmf/common.sh@47 -- # : 0 00:02:01.073 15:39:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:01.073 15:39:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:01.073 15:39:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:01.073 15:39:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:01.073 15:39:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:01.073 15:39:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:01.073 15:39:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:01.073 15:39:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:01.073 15:39:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:01.073 15:39:59 -- spdk/autotest.sh@32 -- # uname -s 00:02:01.073 15:39:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:01.073 15:39:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:01.073 15:39:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:01.330 15:39:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:01.330 15:39:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:01.330 15:39:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:01.330 15:39:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:01.330 15:39:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:01.330 15:39:59 -- spdk/autotest.sh@48 -- # udevadm_pid=3510116 00:02:01.330 15:39:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:01.330 15:39:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:01.331 15:39:59 -- pm/common@17 -- # local monitor 00:02:01.331 15:39:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.331 15:39:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.331 15:39:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.331 15:39:59 -- pm/common@21 -- # date +%s 00:02:01.331 15:39:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.331 15:39:59 -- pm/common@21 -- # date +%s 00:02:01.331 15:39:59 -- pm/common@25 -- # sleep 1 00:02:01.331 15:39:59 -- pm/common@21 -- # date +%s 00:02:01.331 15:39:59 -- pm/common@21 -- # date +%s 00:02:01.331 15:39:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715780399 00:02:01.331 15:39:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715780399 00:02:01.331 15:39:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715780399 00:02:01.331 15:39:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715780399 00:02:01.331 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715780399_collect-vmstat.pm.log 00:02:01.331 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715780399_collect-cpu-load.pm.log 00:02:01.331 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715780399_collect-cpu-temp.pm.log 00:02:01.331 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715780399_collect-bmc-pm.bmc.pm.log 00:02:02.264 15:40:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:02.264 15:40:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:02.264 15:40:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:02.264 15:40:00 -- common/autotest_common.sh@10 -- # set +x 00:02:02.264 15:40:00 -- spdk/autotest.sh@59 -- # create_test_list 00:02:02.264 15:40:00 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:02.264 15:40:00 -- common/autotest_common.sh@10 -- # set +x 00:02:02.264 15:40:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:02.264 15:40:00 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.264 15:40:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.264 15:40:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:02.264 15:40:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.264 15:40:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:02.264 15:40:00 -- common/autotest_common.sh@1451 -- # uname 00:02:02.264 15:40:00 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:02.264 15:40:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:02.264 15:40:00 -- common/autotest_common.sh@1471 -- # uname 00:02:02.264 15:40:00 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:02.264 15:40:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:02.264 15:40:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:02.264 15:40:00 -- spdk/autotest.sh@72 -- # hash lcov 00:02:02.264 15:40:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:02.264 15:40:00 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:02.264 --rc lcov_branch_coverage=1 00:02:02.264 --rc lcov_function_coverage=1 00:02:02.264 --rc genhtml_branch_coverage=1 00:02:02.264 --rc genhtml_function_coverage=1 00:02:02.264 --rc genhtml_legend=1 00:02:02.264 --rc geninfo_all_blocks=1 00:02:02.264 ' 
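The LCOV_OPTS exported above enable branch and function coverage for the lcov runs that follow; the job then captures an initial (-i) baseline so files the tests never touch still show up at 0% in the final report instead of vanishing. A minimal sketch of that capture-and-combine flow, assuming placeholder SRC/OUT paths rather than the exact autotest workspace:

  # Hedged sketch of the baseline capture, not the autotest script itself.
  SRC=/path/to/spdk            # illustrative source tree (placeholder)
  OUT="$SRC/../output"         # illustrative output directory (placeholder)
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
  # -c -i records zero-count baseline data for every instrumented object
  lcov $LCOV_OPTS -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"
  # ... tests run here ...
  # capture real counters, then add both tracefiles so untested files keep 0%
  lcov $LCOV_OPTS -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"
  lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"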
00:02:02.264 15:40:00 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:02.264 --rc lcov_branch_coverage=1 00:02:02.264 --rc lcov_function_coverage=1 00:02:02.264 --rc genhtml_branch_coverage=1 00:02:02.264 --rc genhtml_function_coverage=1 00:02:02.264 --rc genhtml_legend=1 00:02:02.264 --rc geninfo_all_blocks=1 00:02:02.265 ' 00:02:02.265 15:40:00 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:02.265 --rc lcov_branch_coverage=1 00:02:02.265 --rc lcov_function_coverage=1 00:02:02.265 --rc genhtml_branch_coverage=1 00:02:02.265 --rc genhtml_function_coverage=1 00:02:02.265 --rc genhtml_legend=1 00:02:02.265 --rc geninfo_all_blocks=1 00:02:02.265 --no-external' 00:02:02.265 15:40:00 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:02.265 --rc lcov_branch_coverage=1 00:02:02.265 --rc lcov_function_coverage=1 00:02:02.265 --rc genhtml_branch_coverage=1 00:02:02.265 --rc genhtml_function_coverage=1 00:02:02.265 --rc genhtml_legend=1 00:02:02.265 --rc geninfo_all_blocks=1 00:02:02.265 --no-external' 00:02:02.265 15:40:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:02.265 lcov: LCOV version 1.14 00:02:02.265 15:40:00 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:12.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:12.229 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:12.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:12.487 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:12.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:12.487 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:12.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:12.487 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:24.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:24.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:24.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:24.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:24.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:24.686 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:24.686 
[geninfo repeated the same 'no functions found' warning for every remaining test/cpp_headers/*.gcno stub, barrier.gcno through vhost.gcno] 00:02:25.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:25.202 geninfo:
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:25.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:25.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:25.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:25.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:26.610 15:40:24 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:26.610 15:40:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:26.610 15:40:24 -- common/autotest_common.sh@10 -- # set +x 00:02:26.610 15:40:24 -- spdk/autotest.sh@91 -- # rm -f 00:02:26.610 15:40:24 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.894 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:29.894 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:29.894 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:29.894 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:29.894 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:30.152 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:30.411 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:30.411 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:30.411 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:30.411 15:40:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:30.411 15:40:28 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:30.411 15:40:28 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:30.411 15:40:28 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:30.411 15:40:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:30.411 15:40:28 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:30.411 15:40:28 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:30.411 15:40:28 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:30.411 15:40:28 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:30.411 15:40:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:30.411 15:40:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:30.411 15:40:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:30.411 15:40:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:30.411 15:40:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:30.411 15:40:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:30.411 No valid GPT data, bailing 00:02:30.411 15:40:28 -- scripts/common.sh@391 
-- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:30.411 15:40:28 -- scripts/common.sh@391 -- # pt= 00:02:30.411 15:40:28 -- scripts/common.sh@392 -- # return 1 00:02:30.411 15:40:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:30.411 1+0 records in 00:02:30.411 1+0 records out 00:02:30.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00188867 s, 555 MB/s 00:02:30.411 15:40:28 -- spdk/autotest.sh@118 -- # sync 00:02:30.411 15:40:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:30.411 15:40:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:30.411 15:40:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:38.520 15:40:35 -- spdk/autotest.sh@124 -- # uname -s 00:02:38.520 15:40:35 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:38.520 15:40:35 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:38.520 15:40:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:38.520 15:40:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:38.520 15:40:35 -- common/autotest_common.sh@10 -- # set +x 00:02:38.520 ************************************ 00:02:38.520 START TEST setup.sh 00:02:38.520 ************************************ 00:02:38.520 15:40:35 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:38.520 * Looking for test storage... 00:02:38.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:38.520 15:40:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:38.520 15:40:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:38.520 15:40:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:38.520 15:40:36 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:38.520 15:40:36 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:38.520 15:40:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:38.520 ************************************ 00:02:38.520 START TEST acl 00:02:38.520 ************************************ 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:38.520 * Looking for test storage... 
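The exchange above is the block_in_use guard: spdk-gpt.py finds no GPT ("No valid GPT data, bailing"), blkid reports an empty PTTYPE, the helper returns 1, and the disk is treated as free, so autotest zeroes its first MiB before the setup tests start. A minimal sketch of that guard under the same assumption; the device name is illustrative:

  # Hedged sketch: wipe a disk only when no partition table claims it.
  dev=/dev/nvme0n1                               # illustrative device
  pt=$(blkid -s PTTYPE -o value "$dev" || true)  # empty when no partition table
  if [[ -z $pt ]]; then
      # nothing claims the disk: zero the first MiB so stale metadata
      # cannot confuse later tests, mirroring the dd step in the log
      dd if=/dev/zero of="$dev" bs=1M count=1
  fi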
00:02:38.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:38.520 15:40:36 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.520 15:40:36 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:38.520 15:40:36 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:38.520 15:40:36 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:38.520 15:40:36 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:38.520 15:40:36 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:38.520 15:40:36 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:38.520 15:40:36 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.520 15:40:36 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.804 15:40:39 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:41.804 15:40:39 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:41.804 15:40:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.804 15:40:39 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:41.804 15:40:39 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.804 15:40:39 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:45.090 Hugepages 00:02:45.090 node hugesize free / total 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 00:02:45.090 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] [the acl device scan repeats the same ioatdma-vs-nvme check and continue for each remaining DMA channel through 0000:80:04.3] 00:02:45.090 15:40:43
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:45.090 15:40:43 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:45.090 15:40:43 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:45.090 15:40:43 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:45.090 15:40:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:45.090 ************************************ 00:02:45.090 START TEST denied 00:02:45.090 ************************************ 00:02:45.090 15:40:43 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:45.090 15:40:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:45.090 15:40:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:45.090 15:40:43 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:45.090 15:40:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.090 15:40:43 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:48.370 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:48.370 15:40:46 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.370 15:40:46 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.588 00:02:52.588 real 0m7.350s 00:02:52.588 user 0m2.139s 00:02:52.588 sys 0m4.366s 00:02:52.589 15:40:50 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:52.589 15:40:50 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:52.589 ************************************ 00:02:52.589 END TEST denied 00:02:52.589 ************************************ 00:02:52.589 15:40:50 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:52.589 15:40:50 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:52.589 15:40:50 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:52.589 15:40:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:52.589 ************************************ 00:02:52.589 START TEST allowed 00:02:52.589 ************************************ 00:02:52.589 15:40:50 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:52.589 15:40:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:52.589 15:40:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:52.589 15:40:50 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:52.589 15:40:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.589 15:40:50 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.852 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.852 15:40:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:57.852 15:40:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:57.852 15:40:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:57.852 15:40:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.852 15:40:55 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.138 00:03:01.138 real 0m8.428s 00:03:01.138 user 0m2.373s 00:03:01.138 sys 0m4.599s 00:03:01.138 15:40:59 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:01.138 15:40:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:01.138 ************************************ 00:03:01.138 END TEST allowed 00:03:01.138 ************************************ 00:03:01.138 00:03:01.138 real 0m23.287s 00:03:01.138 user 0m7.230s 00:03:01.138 sys 0m14.005s 00:03:01.138 15:40:59 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:01.138 15:40:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:01.138 ************************************ 00:03:01.138 END TEST acl 00:03:01.138 ************************************ 00:03:01.138 15:40:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:01.138 15:40:59 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:01.138 15:40:59 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:03:01.138 15:40:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:01.138 ************************************ 00:03:01.138 START TEST hugepages 00:03:01.138 ************************************ 00:03:01.138 15:40:59 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:01.138 * Looking for test storage... 00:03:01.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 37985408 kB' 'MemAvailable: 42658992 kB' 'Buffers: 2696 kB' 'Cached: 14239584 kB' 'SwapCached: 0 kB' 'Active: 10281028 kB' 'Inactive: 4455504 kB' 'Active(anon): 9714576 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497676 kB' 'Mapped: 207620 kB' 'Shmem: 9220324 kB' 'KReclaimable: 297532 kB' 'Slab: 939060 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 641528 kB' 'KernelStack: 22000 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439056 kB' 'Committed_AS: 11079716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:01.138 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue [the get_meminfo read loop repeats the same compare-and-continue for every field ahead of Hugepagesize in the meminfo dump above, MemFree through AnonHugePages]
00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.139 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.140 15:40:59 
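The loop condensed above is setup/common.sh's get_meminfo: it scans /proc/meminfo key by key until the requested field matches, then echoes its value. A minimal stand-alone sketch of that idiom, simplified from the trace (the real helper also supports per-node lookups via the node meminfo files, which is what the "local node=" and "mem_f" lines later in this log are about):

    get_meminfo() {   # sketch of the traced idiom, not the verbatim SPDK helper
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # each non-matching key shows up in the xtrace as one [[ ... ]] / continue pair
            [[ $var == "$get" ]] || continue
            echo "$val"   # for "Hugepagesize: 2048 kB" this prints 2048; the "kB" lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize   # prints 2048 on this runner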
00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:01.140 15:40:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:01.140 15:40:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:01.140 15:40:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:01.140 15:40:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:01.399 ************************************
00:03:01.399 START TEST default_setup
00:03:01.399 ************************************
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
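The sequence just traced (hugepages.sh@49-57) turns the requested pool size into a page count. The values are consistent with a straight division of the size by the default hugepage size, both in kB; a sketch of that arithmetic, with names mirroring the trace (the function body itself is not echoed in this log):

    size=2097152             # kB, first argument of get_test_nr_hugepages
    default_hugepages=2048   # kB, the Hugepagesize value read above
    echo $(( size / default_hugepages ))   # 1024, i.e. 1024 x 2 MiB pages = 2 GiB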
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:01.399 15:40:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:04.684 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:04.684 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:06.067 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:06.067 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40089716 kB' 'MemAvailable: 44763300 kB' 'Buffers: 2696 kB' 'Cached: 14239724 kB' 'SwapCached: 0 kB' 'Active: 10298236 kB' 'Inactive: 4455504 kB' 'Active(anon): 9731784 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514760 kB' 'Mapped: 207876 kB' 'Shmem: 9220464 kB' 'KReclaimable: 297532 kB' 'Slab: 936068 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638536 kB' 'KernelStack: 22272 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11088552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
[log condensed, 00:03:06.067-00:03:06.069, setup/common.sh@31-32: the same key-by-key walk of /proc/meminfo, MemTotal through HardwareCorrupted, testing each key against AnonHugePages and continuing; elided]
00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
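The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test at hugepages.sh@96 above is matching the contents of /sys/kernel/mm/transparent_hugepage/enabled, so the AnonHugePages lookup is gated on transparent hugepages not being pinned to [never]. A hedged re-reading of that gate, reusing the hypothetical get_meminfo sketch from earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this runner
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 (kB) per the snapshot above
    else
        anon=0
    fi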
00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40089444 kB' 'MemAvailable: 44763028 kB' 'Buffers: 2696 kB' 'Cached: 14239724 kB' 'SwapCached: 0 kB' 'Active: 10297640 kB' 'Inactive: 4455504 kB' 'Active(anon): 9731188 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514168 kB' 'Mapped: 207836 kB' 'Shmem: 9220464 kB' 'KReclaimable: 297532 kB' 'Slab: 936032 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638500 kB' 'KernelStack: 22176 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11088572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216472 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.069 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.070 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [repetitive xtrace condensed: the same three-step pattern, "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]", "continue", "IFS=': ' / read -r var val _", repeats for each remaining /proc/meminfo key (NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) until the requested key is reached] 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
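The xtrace above is the setup/common.sh get_meminfo helper walking /proc/meminfo one "Key: value" line at a time: mapfile reads the whole file into an array, an optional "Node <n> " prefix is stripped, and an IFS=': ' read loop compares each key against the requested one (the escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p is just how set -x renders the quoted right-hand side of the [[ ]] comparison, keeping its characters literal). A minimal standalone sketch of the same parsing idiom, for reference only (a simplified rewrite, not the verbatim SPDK helper):

    shopt -s extglob  # required for the +([0-9]) pattern used below
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Per-NUMA-node counters live in sysfs; every line there carries a
        # "Node <n> " prefix, which is stripped right after mapfile.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # Split "Key:   <value> [kB]" on ': ' and print the value for the
        # requested key - the IFS=': ' / read -r var val _ loop in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as "get_meminfo HugePages_Rsvd" (system-wide, as in the trace that follows) or "get_meminfo HugePages_Surp 0" (NUMA node 0, traced further below).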
00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.071 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40089348 kB' 'MemAvailable: 44762932 kB' 'Buffers: 2696 kB' 'Cached: 14239724 kB' 'SwapCached: 0 kB' 'Active: 10297564 kB' 'Inactive: 4455504 kB' 'Active(anon): 9731112 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514084 kB' 'Mapped: 207844 kB' 'Shmem: 9220464 kB' 'KReclaimable: 297532 kB' 'Slab: 936032 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638500 kB' 'KernelStack: 22192 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11088592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' [repetitive xtrace condensed: the read loop compares every key from MemTotal through Unaccepted against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skips each with continue] 00:03:06.073 15:41:04
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:06.073 nr_hugepages=1024 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:06.073 resv_hugepages=0 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:06.073 surplus_hugepages=0 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:06.073 anon_hugepages=0 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.073 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.074 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40089696 kB' 'MemAvailable: 44763280 kB' 'Buffers: 2696 kB' 'Cached: 14239776 kB' 'SwapCached: 0 kB' 'Active: 10297456 kB' 'Inactive: 4455504 kB' 'Active(anon): 
9731004 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513840 kB' 'Mapped: 207732 kB' 'Shmem: 9220516 kB' 'KReclaimable: 297532 kB' 'Slab: 935992 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638460 kB' 'KernelStack: 22160 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11088612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' [repetitive xtrace condensed: the read loop compares every key from MemTotal through Unaccepted against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skips each with continue] 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.075 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18884232 kB' 'MemUsed: 13754908 kB' 'SwapCached: 0 kB' 'Active: 6658304 kB' 'Inactive: 3367500 kB' 'Active(anon): 6369240 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3367500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9746252 kB' 'Mapped: 129116 kB' 'AnonPages: 282912 kB' 'Shmem: 6089688 kB' 'KernelStack: 11176 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173576 kB' 'Slab: 512668 kB' 'SReclaimable: 173576 kB' 'SUnreclaim: 339092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
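At this point the test has confirmed the global numbers (surp=0, resv=0, and HugePages_Total of 1024 equals nr_hugepages + surp + resv) and switches to per-NUMA-node bookkeeping: get_nodes enumerates /sys/devices/system/node/node<n>, an expected page count is recorded per node (1024 on node 0, 0 on node 1 here), and get_meminfo is re-run with a node argument so it reads the per-node sysfs meminfo just dumped above. A rough sketch of that bookkeeping, reusing the get_meminfo stub sketched earlier (simplified; this trace does not show where the real script takes each node's expected count from, so the per-node HugePages_Total lookup below is an assumed stand-in):

    shopt -s extglob nullglob
    nr_hugepages=1024                  # the system-wide count this run configured
    declare -a nodes_sys

    # Global accounting, as traced above: total must equal requested + surplus + reserved.
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

    # Enumerate NUMA nodes the same way the traced for-loop does.
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}                                  # "/sys/.../node0" -> "0"
        nodes_sys[n]=$(get_meminfo HugePages_Total "$n")  # assumed stand-in
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1

    # Per-node surplus lookup, matching the traced "get_meminfo HugePages_Surp 0".
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: pages=${nodes_sys[n]} surplus=$(get_meminfo HugePages_Surp "$n")"
    done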
00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.076 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [repetitive xtrace condensed: the read loop over node0's meminfo compares every key from MemFree through Unaccepted against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skips each with continue] 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.077 15:41:04
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:06.077 node0=1024 expecting 1024 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.077 00:03:06.077 real 0m4.872s 00:03:06.077 user 0m1.088s 00:03:06.077 sys 0m2.182s 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:06.077 15:41:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:06.077 ************************************ 00:03:06.077 END TEST default_setup 00:03:06.077 ************************************ 00:03:06.077 15:41:04 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:06.077 15:41:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:06.077 15:41:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:06.077 15:41:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.336 ************************************ 00:03:06.336 START TEST per_node_1G_alloc 00:03:06.336 ************************************ 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.336 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.628 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.628 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:09.628 15:41:08 
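At this point the trace has finished the default_setup case and is preparing per_node_1G_alloc: get_test_nr_hugepages is asked for 1048576 kB on nodes 0 and 1, converts that into 512 default-size (2048 kB) pages per node, and drives scripts/setup.sh with NRHUGE=512 HUGENODE=0,1. A rough sketch of that conversion, loosely following the names visible in the trace rather than quoting setup/hugepages.sh:

# Illustrative only: a 1048576 kB per-node request split into default 2048 kB hugepages.
size_kb=1048576
hugepagesize_kb=2048                           # "Hugepagesize: 2048 kB" in the meminfo snapshots
node_ids=(0 1)
nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 512
declare -a nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages             # nodes_test[0]=512, nodes_test[1]=512
done
# The test then invokes the setup script roughly as:
#   NRHUGE=$nr_hugepages HUGENODE=0,1 scripts/setup.sh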
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40103476 kB' 'MemAvailable: 44777060 kB' 'Buffers: 2696 kB' 'Cached: 14239872 kB' 'SwapCached: 0 kB' 'Active: 10296120 kB' 'Inactive: 4455504 kB' 'Active(anon): 9729668 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512288 kB' 'Mapped: 206708 kB' 'Shmem: 9220612 kB' 'KReclaimable: 297532 kB' 'Slab: 936504 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638972 kB' 'KernelStack: 22048 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11075244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.628 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.629 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40103728 kB' 'MemAvailable: 44777312 kB' 'Buffers: 2696 kB' 'Cached: 14239872 kB' 'SwapCached: 0 kB' 'Active: 10296756 kB' 'Inactive: 4455504 kB' 'Active(anon): 9730304 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512988 kB' 'Mapped: 206708 kB' 'Shmem: 9220612 kB' 'KReclaimable: 297532 kB' 'Slab: 936524 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638992 kB' 'KernelStack: 22032 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11075264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 
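A quick cross-check of the meminfo snapshots printed above: the system-wide hugepage pool matches the NRHUGE=512 HUGENODE=0,1 request, and the Hugetlb byte count equals the page count times the page size.

echo $(( 512 * 2 ))      # 1024    = HugePages_Total (512 pages on each of 2 nodes)
echo $(( 1024 * 2048 ))  # 2097152 = Hugetlb in kB (1024 pages x 2048 kB Hugepagesize)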
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.630 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.631 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40104152 kB' 'MemAvailable: 44777736 kB' 'Buffers: 2696 kB' 'Cached: 14239892 kB' 'SwapCached: 0 kB' 'Active: 10296176 kB' 'Inactive: 4455504 kB' 'Active(anon): 9729724 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512440 kB' 'Mapped: 206700 kB' 'Shmem: 9220632 kB' 'KReclaimable: 297532 kB' 'Slab: 936732 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 639200 kB' 'KernelStack: 22032 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11075284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.632 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.633 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:09.634 nr_hugepages=1024 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.634 resv_hugepages=0 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.634 surplus_hugepages=0 00:03:09.634 15:41:08 
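The resv=0 returned just above comes from the same key scan the trace keeps repeating: setup/common.sh walks every field of /proc/meminfo until it reaches the requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total, ...) and echoes its value. A minimal standalone sketch of that lookup (illustration only; the function name and the "Node <id>" prefix handling are inferred from the trace, not copied from setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node id, read the per-node counters instead of the global ones.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }   # per-node files prefix every row with "Node <id> "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"          # e.g. HugePages_Rsvd -> 0, MemTotal -> 60295208
                return 0
            fi
        done <"$mem_f"
        echo 0                            # key not present
    }
    # Example: get_meminfo HugePages_Rsvd   -> 0 on this box, matching resv=0 above
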
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.634 anon_hugepages=0 00:03:09.634 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40104152 kB' 'MemAvailable: 44777736 kB' 'Buffers: 2696 kB' 'Cached: 14239916 kB' 'SwapCached: 0 kB' 'Active: 10296212 kB' 'Inactive: 4455504 kB' 'Active(anon): 9729760 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512440 kB' 'Mapped: 206700 kB' 'Shmem: 9220656 kB' 'KReclaimable: 297532 kB' 'Slab: 936732 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 639200 kB' 'KernelStack: 22032 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11075308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.635 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.636 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- 
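Having fetched HugePages_Total (1024), the test re-checks the accounting echoed earlier: the kernel's total must equal the requested nr_hugepages plus surplus plus reserved pages before the per-node split is attempted. A hedged illustration of that check, reusing the get_meminfo sketch above (variable names follow the trace; the snippet itself is not the script's code):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this log
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this log
    total=$(get_meminfo HugePages_Total)  # 1024 in this log
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "unexpected hugepage count: $total" >&2
        exit 1
    fi
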
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19939844 kB' 'MemUsed: 12699296 kB' 'SwapCached: 0 kB' 'Active: 6657912 kB' 'Inactive: 3367500 kB' 'Active(anon): 6368848 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3367500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9746400 kB' 'Mapped: 128092 kB' 'AnonPages: 282216 kB' 'Shmem: 6089836 kB' 'KernelStack: 11176 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173576 kB' 'Slab: 513100 kB' 'SReclaimable: 173576 kB' 'SUnreclaim: 339524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.637 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 checks each remaining node0 meminfo field (MemUsed ... HugePages_Free) against HugePages_Surp and hits 'continue' on every one]
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.639 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20168560 kB' 'MemUsed: 7487508 kB' 'SwapCached: 0 kB' 'Active: 3638608 kB' 'Inactive: 1088004 kB' 'Active(anon): 3361220 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 1088004 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4496232 kB' 'Mapped: 78608 kB' 'AnonPages: 230544 kB' 'Shmem: 3130840 kB' 'KernelStack: 10856 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123956 kB' 'Slab: 423632 kB' 'SReclaimable: 123956 kB' 'SUnreclaim: 299676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
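The node1 dump just printed is internally consistent: MemUsed = MemTotal - MemFree = 27656068 kB - 20168560 kB = 7487508 kB, and HugePages_Total = HugePages_Free = 512 is exactly the per-node count the test asserts below ('node1=512 expecting 512').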
[xtrace condensed: setup/common.sh@31-32 checks each node1 meminfo field just printed (MemTotal ... HugePages_Free) against HugePages_Surp and hits 'continue' on every one]
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:09.900 node0=512 expecting 512
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:09.900 node1=512 expecting 512
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:09.900
00:03:09.900 real	0m3.550s
00:03:09.900 user	0m1.335s
00:03:09.900 sys	0m2.276s
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:09.900 15:41:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:09.900 ************************************
00:03:09.900 END TEST per_node_1G_alloc
00:03:09.900 ************************************
00:03:09.900 15:41:08 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:09.900 15:41:08 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:09.900 15:41:08 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:09.900 15:41:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:09.900 ************************************
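The 'continue' storm condensed above is one helper doing all the work: setup/common.sh's get_meminfo reads /proc/meminfo (or a node's meminfo file), strips the "Node N " prefix, and scans field by field until the requested key matches. A minimal standalone sketch of the traced approach, with names and paths taken from the trace (the real setup/common.sh carries extra plumbing):

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs it

    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the skipped fields in the trace
            echo "$val"
            return 0
        done
        echo 0   # field absent
    }

    get_meminfo HugePages_Surp 1   # prints 0 against the node1 dump above

Called that way it returns 0 here, which is why the accounting above adds nothing to nodes_test[] for surplus pages.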
00:03:09.900 START TEST even_2G_alloc
00:03:09.900 ************************************
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:09.900 15:41:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:13.189 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:13.189 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:13.190 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
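Before sampling memory, verify_nr_hugepages gates on transparent hugepages: the hugepages.sh@96 test above asks whether the kernel's THP mode string ("always [madvise] never" on this box) contains "[never]". A sketch of that gate under the standard sysfs path, reusing the get_meminfo sketch from the previous test (variable names are illustrative):

    # Anonymous THP only has to be counted when THP is not hard-disabled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of anonymous huge pages in use
    fi
    echo "anon=$anon"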
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.190 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40106664 kB' 'MemAvailable: 44780248 kB' 'Buffers: 2696 kB' 'Cached: 14240036 kB' 'SwapCached: 0 kB' 'Active: 10295864 kB' 'Inactive: 4455504 kB' 'Active(anon): 9729412 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511624 kB' 'Mapped: 206740 kB' 'Shmem: 9220776 kB' 'KReclaimable: 297532 kB' 'Slab: 936408 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638876 kB' 'KernelStack: 22032 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11075932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
[xtrace condensed: setup/common.sh@31-32 checks each /proc/meminfo field just printed (MemTotal ... HardwareCorrupted) against AnonHugePages and hits 'continue' on every one]
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
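The anon check resolves to 0, matching 'AnonHugePages: 0 kB' in the dump. Stepping back, the whole even_2G_alloc setup traced at hugepages.sh@49-84 reduces to one division: 2097152 kB / 2048 kB per page = 1024 hugepages, 512 per node. A sketch of that arithmetic with names mirroring the trace (the real helper also honors explicit per-node requests):

    #!/usr/bin/env bash
    size=2097152             # kB, the argument traced into get_test_nr_hugepages
    default_hugepages=2048   # kB per 2M hugepage (Hugepagesize in /proc/meminfo)
    _no_nodes=2

    nr_hugepages=$(( size / default_hugepages ))    # 1024
    _nr_hugepages=$(( nr_hugepages / _no_nodes ))   # 512 each
    declare -a nodes_test
    while (( _no_nodes > 0 )); do                   # countdown, as in @81-@84
        nodes_test[_no_nodes - 1]=$_nr_hugepages
        (( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512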
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.191 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40107076 kB' 'MemAvailable: 44780660 kB' 'Buffers: 2696 kB' 'Cached: 14240036 kB' 'SwapCached: 0 kB' 'Active: 10295924 kB' 'Inactive: 4455504 kB' 'Active(anon): 9729472 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511648 kB' 'Mapped: 206732 kB' 'Shmem: 9220776 kB' 'KReclaimable: 297532 kB' 'Slab: 936400 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638868 kB' 'KernelStack: 22032 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11075952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
[xtrace condensed: setup/common.sh@31-32 begins checking the fields just printed (MemTotal ... Writeback so far) against HugePages_Surp, hitting 'continue' on each; the scan continues below]
00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.192 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.194 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:13.194 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.194 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.194 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.194 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.194 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.194 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
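The xtrace above is SPDK's get_meminfo() helper scanning /proc/meminfo for a single key. A minimal bash sketch of that loop, reconstructed from the setup/common.sh@17..@33 markers in the trace; this is an approximation for readability, not the verbatim SPDK source:

#!/usr/bin/env bash
# Sketch of the get_meminfo() helper whose xtrace appears above,
# reconstructed from the setup/common.sh@17..@33 markers (approximate).
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# when a node id is passed, prefer the node-local snapshot
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# node files prefix every line with "Node <id> "; strip it (extglob)
	mem=("${mem[@]#Node +([0-9]) }")

	# scan the snapshot; IFS=': ' splits "Key:   value kB" into var/val
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] && {
			echo "$val"
			return 0
		}
	done
	return 1
}

get_meminfo HugePages_Surp   # prints "0" on the machine in this log

The IFS=': ' split is what turns each quoted 'Key: value kB' snapshot line into the var/val pair compared at @32, and the continue on every non-matching key is what produces the long repetitive trace.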
00:03:13.193 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace condensed: the same common.sh@17..@31 setup as above, this time with get=HugePages_Rsvd and mem_f=/proc/meminfo ...]
00:03:13.194 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40107092 kB' 'MemAvailable: 44780676 kB' 'Buffers: 2696 kB' 'Cached: 14240056 kB' 'SwapCached: 0 kB' 'Active: 10295068 kB' 'Inactive: 4455504 kB' 'Active(anon): 9728616 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511212 kB' 'Mapped: 206656 kB' 'Shmem: 9220796 kB' 'KReclaimable: 297532 kB' 'Slab: 936392 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638860 kB' 'KernelStack: 22016 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11075972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
[... xtrace loop condensed: the @31/@32 read loop again walks every key from MemTotal onward, hitting continue on each, until the HugePages_Rsvd line matches ...]
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:13.196 nr_hugepages=1024
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:13.196 resv_hugepages=0
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:13.196 surplus_hugepages=0
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:13.196 anon_hugepages=0
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
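With surp=0 and resv=0 recorded, the test prints the pool bookkeeping and asserts the kernel pool is exactly the requested size. A rough standalone rendition of that accounting check, under the assumption that the earlier get_meminfo sketch lives in a hypothetical file get_meminfo.sh; the values shown are the ones from this run:

#!/usr/bin/env bash
# Sketch of the bookkeeping traced at setup/hugepages.sh@99..@110 above
# (approximate standalone form; not the verbatim SPDK test).
source ./get_meminfo.sh   # hypothetical file holding the earlier sketch

nr_hugepages=1024                      # 1024 x 2048 kB pages == the "even 2G"
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# the pool is consistent only if Total == requested + surplus + reserved
total=$(get_meminfo HugePages_Total)   # 1024 in this run
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1

Surplus pages would indicate the kernel over-allocated beyond the request, and reserved pages would indicate outstanding mmap reservations; both being zero is what lets the strict equality checks pass here.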
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace condensed: the same common.sh@17..@31 setup, with get=HugePages_Total ...]
00:03:13.196 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40107780 kB' 'MemAvailable: 44781364 kB' 'Buffers: 2696 kB' 'Cached: 14240080 kB' 'SwapCached: 0 kB' 'Active: 10295048 kB' 'Inactive: 4455504 kB' 'Active(anon): 9728596 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511208 kB' 'Mapped: 206656 kB' 'Shmem: 9220820 kB' 'KReclaimable: 297532 kB' 'Slab: 936392 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 638860 kB' 'KernelStack: 22016 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11075996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
[... xtrace loop condensed: the @31/@32 read loop walks every key, hitting continue on each, until the HugePages_Total line matches ...]
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.198 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19954524 kB' 'MemUsed: 12684616 kB' 'SwapCached: 0 kB' 'Active: 6656648 kB' 'Inactive: 3367500 kB' 'Active(anon): 6367584 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3367500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9746520 kB' 'Mapped: 128048 kB' 'AnonPages: 280976 kB' 'Shmem: 6089956 kB' 'KernelStack: 11192 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173576 kB' 'Slab: 512828 kB' 'SReclaimable: 173576 kB' 'SUnreclaim: 339252 kB' 'AnonHugePages: 0
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: the same common.sh@31-32 field walk over node0's meminfo, skipping every entry from MemTotal to HugePages_Free with 'continue' until HugePages_Surp matches ...]
00:03:13.200 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.200 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.200 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.200 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
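The lookups above and below are setup/common.sh's get_meminfo helper, which the test uses to pull one field out of /proc/meminfo or out of a per-node meminfo file. A minimal standalone sketch of the same pattern, reconstructed from the trace (the function and variable names are the ones the trace shows; the body is a simplification, not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob
    get_meminfo() {   # usage: get_meminfo <Field> [<node>]; prints the field's value
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # Per-node lookups switch to that node's own meminfo file (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local mem=()
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix (common.sh@29)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long 'continue' runs in this log
            echo "$val"                        # common.sh@33
            return 0
        done
        return 1
    }

Used the way hugepages.sh@115-117 use it here (resv is 0 in this run; splitting @117 into two statements is an assumption for readability):

    nodes_test=(512 512) resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # hugepages.sh@116
        surp=$(get_meminfo HugePages_Surp "$node")   # hugepages.sh@117; 0 on both nodes
        (( nodes_test[node] += surp ))
    done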
00:03:13.200 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.200 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.200 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[... xtrace condensed: the usual get_meminfo setup (common.sh@17-31), this time reading /sys/devices/system/node/node1/meminfo ...]
00:03:13.200 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20152752 kB' 'MemUsed: 7503316 kB' 'SwapCached: 0 kB' 'Active: 3638400 kB' 'Inactive: 1088004 kB' 'Active(anon): 3361012 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 1088004 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4496256 kB' 'Mapped: 78608 kB' 'AnonPages: 230232 kB' 'Shmem: 3130864 kB' 'KernelStack: 10824 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123956 kB' 'Slab: 423564 kB' 'SReclaimable: 123956 kB' 'SUnreclaim: 299608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: the field walk over node1's meminfo, 'continue' on every entry from MemTotal up to Unaccepted ...] 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:13.201 node0=512 expecting 512 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:13.201 node1=512 expecting 512 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:13.201 00:03:13.201 real 0m3.155s 00:03:13.201 user 0m1.071s 00:03:13.201 sys 0m2.076s 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:13.201 15:41:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:13.201 ************************************ 00:03:13.201 END TEST even_2G_alloc 00:03:13.201 ************************************ 00:03:13.201 15:41:11 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:13.201 15:41:11 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:13.201 15:41:11 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:13.201 15:41:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.201 ************************************ 00:03:13.201 START TEST odd_alloc 00:03:13.201 ************************************ 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # 
get_test_nr_hugepages 2098176 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.201 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.202 15:41:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
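The hugepages.sh@81-84 lines above show the odd-count split only through its expanded values: HUGEMEM=2049 MB is 2098176 kB, which at 2048 kB per page works out to 1024.5 and is rounded up to 1025 pages (deliberately odd), and the loop then hands node1 512 pages and node0 the remaining 513. A plausible reconstruction of that loop; the arithmetic reproduces every value the xtrace printed, but the exact source lines are not visible in this log:

    _nr_hugepages=1025 _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do                                    # hugepages.sh@81
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # @82: 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # @83: 513 left, then 0
        : $(( --_no_nodes ))                                         # @84: 1, then 0
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"             # node0=513 node1=512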
00:03:16.514 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:16.514 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
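Before sampling AnonHugePages, hugepages.sh@96 above compares the contents of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" on this host, i.e. THP in madvise mode) against the pattern *[never]*, so the counter is only read when transparent hugepages are not disabled outright. The same gate as a standalone fragment, reusing the get_meminfo sketch earlier in this log (a sketch, not the verbatim source):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then                    # hugepages.sh@96
        anon=$(get_meminfo AnonHugePages)                 # hugepages.sh@97; 0 kB below
    fi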
00:03:16.514 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... xtrace condensed: get_meminfo locals (common.sh@17-25: get=AnonHugePages, node empty, so mem_f stays /proc/meminfo), then mapfile and prefix strip (common.sh@28-29) ...]
00:03:16.515 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40113888 kB' 'MemAvailable: 44787472 kB' 'Buffers: 2696 kB' 'Cached: 14240196 kB' 'SwapCached: 0 kB' 'Active: 10297876 kB' 'Inactive: 4455504 kB' 'Active(anon): 9731424 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513356 kB' 'Mapped: 206748 kB' 'Shmem: 9220936 kB' 'KReclaimable: 297532 kB' 'Slab: 937136 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 639604 kB' 'KernelStack: 22048 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11076612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
[... xtrace condensed: the common.sh@31-32 field walk over the full /proc/meminfo, skipping every entry from MemTotal to HardwareCorrupted with 'continue' until AnonHugePages matches ...]
00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... xtrace condensed: the same get_meminfo setup (common.sh@17-31) against /proc/meminfo ...]
00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40113696 kB'
'MemAvailable: 44787280 kB' 'Buffers: 2696 kB' 'Cached: 14240200 kB' 'SwapCached: 0 kB' 'Active: 10297744 kB' 'Inactive: 4455504 kB' 'Active(anon): 9731292 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513188 kB' 'Mapped: 206748 kB' 'Shmem: 9220940 kB' 'KReclaimable: 297532 kB' 'Slab: 937136 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 639604 kB' 'KernelStack: 22032 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11076628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.516 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
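For reference, the long runs of "continue" entries above and below come from setup/common.sh's get_meminfo helper scanning the meminfo snapshot one field at a time until it hits the requested key (AnonHugePages, HugePages_Surp, and so on). The following is a minimal sketch of that loop reconstructed from the fragments visible in this trace (mem_f=/proc/meminfo, mapfile -t mem, the "Node N " prefix strip, and the IFS=': ' read); the function name and those fragments are taken from the log, but the surrounding control flow is an approximation and may differ from the actual setup/common.sh:

    shopt -s extglob                       # needed for the "Node +([0-9]) " prefix strip below
    get_meminfo() {                        # usage: get_meminfo <Key> [<node>]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries would read the node-specific file when it exists (assumed fallback here).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each "continue" entry in the trace is this test failing
            echo "${val:-0}"               # matches the "echo 0" / "return 0" pair seen at common.sh@33
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }

    get_meminfo HugePages_Total            # would print 1025 with the snapshots shown in this log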
00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.517 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 
15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40113436 kB' 'MemAvailable: 44787020 kB' 'Buffers: 2696 kB' 'Cached: 14240216 kB' 'SwapCached: 0 kB' 'Active: 10296460 kB' 'Inactive: 4455504 kB' 'Active(anon): 9730008 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512300 kB' 'Mapped: 206660 kB' 'Shmem: 9220956 kB' 'KReclaimable: 297532 kB' 'Slab: 937092 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 639560 kB' 'KernelStack: 21984 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11076652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.518 15:41:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.518 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
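The values the odd_alloc check is collecting here are consistent with the meminfo snapshots printed in this trace: 1025 hugepages of 2048 kB each, with no surplus, reserved, or anonymous hugepages. A small illustrative cross-check of those numbers (not part of the test script itself):

    echo $((1025 * 2048))    # -> 2099200, matching 'Hugetlb: 2099200 kB' in the dumps above
    echo $((1025 - 0 - 0))   # HugePages_Total - HugePages_Rsvd - HugePages_Surp -> 1025 == 'HugePages_Free: 1025'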
00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.781 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.782 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 
15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:16.783 nr_hugepages=1025 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.783 resv_hugepages=0 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.783 surplus_hugepages=0 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.783 anon_hugepages=0 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo 
]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40113884 kB' 'MemAvailable: 44787468 kB' 'Buffers: 2696 kB' 'Cached: 14240236 kB' 'SwapCached: 0 kB' 'Active: 10296516 kB' 'Inactive: 4455504 kB' 'Active(anon): 9730064 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512376 kB' 'Mapped: 206660 kB' 'Shmem: 9220976 kB' 'KReclaimable: 297532 kB' 'Slab: 937092 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 639560 kB' 'KernelStack: 22016 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11076672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.783 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 
15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.784 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 
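For reference, the get_meminfo call traced above (setup/common.sh@16-33) reduces to the following minimal sketch. It is reconstructed from the xtrace output, not copied from the SPDK source, so the function name and exact flow are approximations:

shopt -s extglob   # needed for the "Node <n> " prefix-stripping pattern below
get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <field> [node]
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem
    # Per-node meminfo files prefix every line with "Node <n> "; strip that
    # prefix so the same field names match in both the global and per-node case.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every non-matching field is skipped, as in the trace
        echo "$val"
        return 0
    done
    return 1
}
# In the run above: get_meminfo_sketch HugePages_Total   -> 1025
#                   get_meminfo_sketch HugePages_Surp 0  -> 0   (node 0)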
00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19943640 kB' 'MemUsed: 12695500 kB' 'SwapCached: 0 kB' 'Active: 6659184 kB' 'Inactive: 3367500 kB' 'Active(anon): 6370120 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3367500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9746676 kB' 'Mapped: 128052 kB' 'AnonPages: 283228 kB' 'Shmem: 6090112 kB' 'KernelStack: 11176 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173576 kB' 'Slab: 513328 kB' 'SReclaimable: 173576 kB' 'SUnreclaim: 339752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.785 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.786 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20170252 kB' 'MemUsed: 7485816 kB' 'SwapCached: 0 kB' 'Active: 3637044 kB' 'Inactive: 1088004 kB' 'Active(anon): 3359656 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 1088004 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4496256 kB' 'Mapped: 78608 kB' 'AnonPages: 228864 kB' 'Shmem: 3130864 kB' 'KernelStack: 10840 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123956 kB' 'Slab: 423764 kB' 'SReclaimable: 123956 kB' 'SUnreclaim: 299808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.787 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.788 15:41:15 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:16.788 node0=512 expecting 513 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:16.788 node1=513 expecting 512 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:16.788 00:03:16.788 real 0m3.685s 00:03:16.788 user 0m1.404s 00:03:16.788 sys 0m2.339s 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:16.788 15:41:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:16.788 ************************************ 00:03:16.788 END TEST odd_alloc 00:03:16.788 ************************************ 00:03:16.788 15:41:15 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:16.788 15:41:15 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:16.788 15:41:15 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:16.788 15:41:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.788 ************************************ 00:03:16.788 START TEST custom_alloc 00:03:16.788 ************************************ 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 
00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:16.788 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:16.789 15:41:15 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.789 15:41:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.076 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.076 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:20.076 
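The custom_alloc run above asks setup.sh for an asymmetric split via HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', i.e. 1536 pages of 2048 kB in total. Purely as an illustration of the end state being requested, and not what scripts/setup.sh literally executes, the same per-node counts can be set through the kernel's per-node hugepage sysfs knobs:

# Illustration only (assumes a 2048 kB default hugepage size and that nodes 0 and 1 exist):
echo 512  | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages   # 512, then 1024
grep HugePages_Total /proc/meminfo                                           # 1536

The verify_nr_hugepages trace that follows reads these counters back through get_meminfo and checks that the per-node and total figures add up.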
15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.076 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39075080 kB' 'MemAvailable: 43748664 kB' 'Buffers: 2696 kB' 'Cached: 14240360 kB' 'SwapCached: 0 kB' 'Active: 10297948 kB' 'Inactive: 4455504 kB' 'Active(anon): 9731496 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513176 kB' 'Mapped: 206776 kB' 'Shmem: 9221100 kB' 'KReclaimable: 297532 kB' 'Slab: 937820 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 640288 kB' 'KernelStack: 22032 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11077136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.077 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.078 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39075148 kB' 'MemAvailable: 43748732 kB' 'Buffers: 2696 kB' 'Cached: 14240364 kB' 'SwapCached: 0 kB' 'Active: 10297644 kB' 'Inactive: 4455504 kB' 'Active(anon): 9731192 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512916 kB' 'Mapped: 206760 kB' 'Shmem: 9221104 kB' 'KReclaimable: 297532 kB' 'Slab: 937820 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 640288 kB' 'KernelStack: 22016 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11077152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.079 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.080 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.081 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39074908 kB' 'MemAvailable: 43748492 kB' 'Buffers: 2696 kB' 'Cached: 14240380 kB' 'SwapCached: 0 kB' 'Active: 10297176 kB' 'Inactive: 4455504 kB' 'Active(anon): 9730724 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512876 kB' 'Mapped: 206684 kB' 'Shmem: 9221120 kB' 'KReclaimable: 297532 kB' 'Slab: 937772 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 640240 kB' 'KernelStack: 22016 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11077176 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.082 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.083 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 
15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.084 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.085 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:20.085 nr_hugepages=1536 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.085 resv_hugepages=0 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.085 surplus_hugepages=0 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.085 anon_hugepages=0 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39075312 kB' 'MemAvailable: 43748896 kB' 'Buffers: 2696 kB' 'Cached: 14240400 kB' 'SwapCached: 0 kB' 'Active: 10297180 kB' 'Inactive: 4455504 kB' 'Active(anon): 9730728 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512872 kB' 'Mapped: 206684 kB' 'Shmem: 9221140 kB' 'KReclaimable: 297532 kB' 'Slab: 937772 kB' 'SReclaimable: 297532 kB' 'SUnreclaim: 640240 kB' 'KernelStack: 22016 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11077196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.085 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.086 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.087 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19946072 kB' 'MemUsed: 12693068 kB' 'SwapCached: 0 kB' 'Active: 6659172 kB' 'Inactive: 3367500 kB' 'Active(anon): 6370108 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3367500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9746784 kB' 'Mapped: 128076 kB' 'AnonPages: 283020 kB' 'Shmem: 6090220 kB' 'KernelStack: 11112 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173576 kB' 'Slab: 513728 kB' 'SReclaimable: 173576 kB' 'SUnreclaim: 340152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.087 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.088 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.089 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 19128988 kB' 'MemUsed: 8527080 kB' 'SwapCached: 0 kB' 'Active: 3638672 kB' 'Inactive: 1088004 kB' 'Active(anon): 3361284 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 1088004 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4496356 kB' 'Mapped: 78608 kB' 'AnonPages: 230512 kB' 'Shmem: 3130964 kB' 'KernelStack: 10936 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123956 kB' 'Slab: 424044 kB' 'SReclaimable: 123956 kB' 'SUnreclaim: 300088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:20.089 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # get_meminfo scans the node's remaining meminfo fields one by one (Active, Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), continuing on every key that is not HugePages_Surp
00:03:20.090 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.090 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.090 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:20.090 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.090 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126-128 -- # for each node, the result is recorded in sorted_t/sorted_s and echoed:
00:03:20.090 node0=512 expecting 512
00:03:20.091 node1=1024 expecting 1024
00:03:20.091 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] -- the per-node split matches the expected 512,1024
00:03:20.091 real 0m3.042s
00:03:20.091 user 0m0.971s
00:03:20.091 sys 0m1.888s
00:03:20.091 15:41:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:20.091 15:41:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:20.091 ************************************
00:03:20.091 END TEST custom_alloc
00:03:20.091 ************************************
00:03:20.091 15:41:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:20.091 15:41:18 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:20.091 15:41:18 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:20.091 15:41:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.091 ************************************
00:03:20.091 START TEST no_shrink_alloc
00:03:20.091 ************************************
00:03:20.091 15:41:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:03:20.091 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:20.091 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49-57 -- # size=2097152 (kB), one node id ('0') passed and shifted in, size >= default_hugepages, nr_hugepages=1024
00:03:20.091 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58-73 -- # get_test_nr_hugepages_per_node 0: user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2, for _no_nodes in user_nodes ('0'): nodes_test[_no_nodes]=1024, return 0
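The nr_hugepages value above follows from the requested size and the 2048 kB hugepage size this node reports; a minimal sketch of that arithmetic in shell (illustrative variable names, not the literal setup/hugepages.sh@49-73 code):

  # Assumed inputs mirroring the trace: a 2 GiB request and 2048 kB hugepages.
  size=2097152              # first argument to get_test_nr_hugepages, in kB
  default_hugepages=2048    # Hugepagesize from /proc/meminfo, in kB
  nr_hugepages=$(( size / default_hugepages ))
  echo "nodes_test[0]=${nr_hugepages}"    # prints nodes_test[0]=1024, matching the trace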
00:03:20.091 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:20.091 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.091 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.379 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.379 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.380 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.380 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.380 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.380 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.380 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.380 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.380 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.380 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:23.380 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:23.380 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] -- transparent hugepages are not set to never
00:03:23.380 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.380 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # get=AnonHugePages, node= (none, so no /sys/devices/system/node/nodeN/meminfo), mem_f=/proc/meminfo, mapfile -t mem, strip any leading 'Node N ' prefixes
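The get_meminfo call traced here is, at its core, a small /proc/meminfo parser: split each line on ': ', compare the key against the requested field, and emit the value on a match. A simplified sketch of that pattern (not the verbatim setup/common.sh helper, which also handles the per-node meminfo files mentioned above):

  get_meminfo_value() {
      # Print the value of the requested /proc/meminfo key, e.g. AnonHugePages.
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done </proc/meminfo
      return 1
  }
  # get_meminfo_value AnonHugePages   -> 0 on this machine, per the snapshot read next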
00:03:23.380 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.380 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.380 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # /proc/meminfo snapshot for the AnonHugePages query: 'MemTotal: 60295208 kB' 'MemFree: 40106224 kB' 'MemAvailable: 44779792 kB' 'Buffers: 2696 kB' 'Cached: 14240520 kB' 'SwapCached: 0 kB' 'Active: 10298080 kB' 'Inactive: 4455504 kB' 'Active(anon): 9731628 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513676 kB' 'Mapped: 206792 kB' 'Shmem: 9221260 kB' 'KReclaimable: 297500 kB' 'Slab: 937436 kB' 'SReclaimable: 297500 kB' 'SUnreclaim: 639936 kB' 'KernelStack: 22128 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11081080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216712 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
00:03:23.380-381 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # every field from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped with continue
00:03:23.381 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.381 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.381 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:23.381 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:23.381 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.381 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # same prelude as above: get=HugePages_Surp, node= (none), mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _
00:03:23.381-382 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # /proc/meminfo snapshot for the HugePages_Surp query: identical to the previous snapshot except 'MemFree: 40105388 kB' 'MemAvailable: 44778956 kB' 'Cached: 14240524 kB' 'Active: 10298452 kB' 'Active(anon): 9732000 kB' 'AnonPages: 514184 kB' 'Mapped: 206744 kB' 'Shmem: 9221264 kB' 'Slab: 937544 kB' 'SUnreclaim: 640044 kB' 'KernelStack: 22096 kB' 'PageTables: 9204 kB' 'Committed_AS: 11081096 kB' 'VmallocUsed: 216680 kB'
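Everything the verification needs from that snapshot sits in the HugePages_*/Hugetlb block; outside the test harness the equivalent one-off spot-check is simply (illustrative):

  # Expected output on this node, per the snapshot above:
  #   HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0,
  #   Hugepagesize 2048 kB, Hugetlb 2097152 kB
  grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo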
00:03:23.382-384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # every field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue
00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # same prelude: get=HugePages_Rsvd, node= (none), mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _
00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # /proc/meminfo snapshot for the HugePages_Rsvd query: identical to the first snapshot except 'MemFree: 40105952 kB' 'MemAvailable: 44779520 kB' 'Cached: 14240540 kB' 'Active: 10298680 kB' 'Active(anon): 9732228 kB' 'AnonPages: 514392 kB' 'Mapped: 206736 kB' 'Shmem: 9221280 kB' 'Slab: 937544 kB' 'SUnreclaim: 640044 kB' 'KernelStack: 22112 kB' 'PageTables: 8544 kB' 'Committed_AS: 11081120 kB' 'VmallocUsed: 216616 kB'
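Each of these queries (AnonHugePages, then HugePages_Surp, now HugePages_Rsvd) re-reads and re-scans the whole meminfo snapshot. Purely as an illustration of collecting the same three counters in a single pass, and not how setup/common.sh is actually written:

  # One awk pass over /proc/meminfo instead of three separate scans.
  read -r anon surp rsvd < <(awk -F': *' '
      /^AnonHugePages:/  { a = $2 }
      /^HugePages_Surp:/ { s = $2 }
      /^HugePages_Rsvd:/ { r = $2 }
      END { print a+0, s+0, r+0 }' /proc/meminfo)
  echo "anon=${anon} surp=${surp} rsvd=${rsvd}"   # all 0 here, per the snapshots above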
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.384 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.385 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.386 nr_hugepages=1024 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.386 resv_hugepages=0 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.386 surplus_hugepages=0 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.386 anon_hugepages=0 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40107084 kB' 'MemAvailable: 44780652 kB' 'Buffers: 2696 kB' 'Cached: 14240544 kB' 'SwapCached: 0 kB' 'Active: 10298536 kB' 'Inactive: 4455504 kB' 'Active(anon): 9732084 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514284 kB' 'Mapped: 206744 kB' 'Shmem: 9221284 kB' 'KReclaimable: 297500 kB' 'Slab: 937544 kB' 'SReclaimable: 297500 kB' 'SUnreclaim: 640044 kB' 'KernelStack: 22112 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11081140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216696 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 
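With surp=0 and resv=0 in hand, hugepages.sh echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then re-reads /proc/meminfo for HugePages_Total so it can confirm that the kernel's view of the pool matches what was requested. The checks at hugepages.sh@107 and @110 amount to the arithmetic below; this sketch reuses the hypothetical get_meminfo_field helper from above and is not the script's literal code:

    nr_hugepages=1024                            # pages the test asked for
    surp=$(get_meminfo_field HugePages_Surp)     # 0
    resv=$(get_meminfo_field HugePages_Rsvd)     # 0
    total=$(get_meminfo_field HugePages_Total)   # 1024

    # The pool is consistent when the reported total equals the requested
    # pages plus any surplus and reserved pages.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "hugepage accounting mismatch: total=$total" >&2
    fi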
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.386 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.386 
15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.387 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
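Once the system-wide total checks out, get_nodes (hugepages.sh@112) globs /sys/devices/system/node/node+([0-9]) to discover the NUMA layout; this machine reports two nodes (no_nodes=2), with all 1024 pages on node 0 (nodes_sys[0]=1024, nodes_sys[1]=0). A rough equivalent, assuming 2048 kB hugepages and the standard per-node sysfs layout rather than the script's meminfo-based accounting:

    # Count 2 MiB hugepages per NUMA node via sysfs.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        count=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node}: ${count} hugepages"
    done
    # Expected here: node0: 1024, node1: 0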
nodes_sys[${node##*node}]=0 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18904792 kB' 'MemUsed: 13734348 kB' 'SwapCached: 0 kB' 'Active: 6658892 kB' 'Inactive: 3367500 kB' 'Active(anon): 6369828 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3367500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9746864 kB' 'Mapped: 128128 kB' 'AnonPages: 282752 kB' 'Shmem: 6090300 kB' 'KernelStack: 11128 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173544 kB' 'Slab: 513532 kB' 'SReclaimable: 173544 kB' 'SUnreclaim: 339988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 
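For the per-node pass the same get_meminfo path simply switches its input file: /sys/devices/system/node/node0/meminfo exists, so mem_f points there instead of /proc/meminfo, and the "${mem[@]#Node +([0-9]) }" expansion drops the "Node 0 " prefix from every line so the field names match the system-wide format. With the hypothetical helper sketched earlier that is just:

    get_meminfo_field HugePages_Surp  /sys/devices/system/node/node0/meminfo   # 0
    get_meminfo_field HugePages_Total /sys/devices/system/node/node0/meminfo   # 1024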
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.388 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- 
00:03:23.389 15:41:21 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 read loop skips the remaining node0 meminfo fields (Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) until the requested field matches]
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:23.390 node0=1024 expecting 1024
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
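The condensed read loops above are the entire body of the get_meminfo helper this test leans on: walk a meminfo file with IFS=': ', ignore every field until the requested one matches, then print its value and return. A minimal, self-contained sketch of that pattern, assuming a hypothetical name get_meminfo_sketch (the real helper lives in setup/common.sh and reads from the mem array built below):

#!/usr/bin/env bash
# Sketch of the field lookup the xtrace shows; not the verbatim SPDK helper.
get_meminfo_sketch() {
	local get=$1 mem_f=${2:-/proc/meminfo} var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue # the long run of "continue"s in the trace
		echo "$val"                      # e.g. 0 for HugePages_Surp on this box
		return 0
	done <"$mem_f"
	return 1
}
get_meminfo_sketch HugePages_Surp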
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.390 15:41:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:26.671 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.671 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.672 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40121500 kB' 'MemAvailable: 44795068 kB' 'Buffers: 2696 kB' 'Cached: 14240652 kB' 'SwapCached: 0 kB' 'Active: 10304464 kB' 'Inactive: 4455504 kB' 'Active(anon): 9738012 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519468 kB' 'Mapped: 207340 kB' 'Shmem: 9221392 kB' 'KReclaimable: 297500 kB' 'Slab: 938128 kB' 'SReclaimable: 297500 kB' 'SUnreclaim: 640628 kB' 'KernelStack: 21968 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11086104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216652 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
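Before each lookup the trace builds the mem array: mapfile reads the chosen meminfo file, and an extglob expansion strips the "Node <N> " prefix that per-node files carry. A small sketch of that step, assuming extglob and a hypothetical node variable (in the traced call node was unset, so the check at common.sh@23 fell through to /proc/meminfo):

#!/usr/bin/env bash
shopt -s extglob                    # needed for the +([0-9]) pattern below
node=0                              # hypothetical; the traced call had no node argument
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
	mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem <"$mem_f"            # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]:0:3}"       # first few fields, prefix-free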
00:03:26.672 15:41:24 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 read loop skips every /proc/meminfo field from MemTotal through HardwareCorrupted until AnonHugePages matches]
00:03:26.673 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40122872 kB' 'MemAvailable: 44796440 kB' 'Buffers: 2696 kB' 'Cached: 14240656 kB' 'SwapCached: 0 kB' 'Active: 10299956 kB' 'Inactive: 4455504 kB' 'Active(anon): 9733504 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515448 kB' 'Mapped: 207260 kB' 'Shmem: 9221396 kB' 'KReclaimable: 297500 kB' 'Slab: 938144 kB' 'SReclaimable: 297500 kB' 'SUnreclaim: 640644 kB' 'KernelStack: 22064 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11083496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
00:03:26.674 15:41:24 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 read loop skips every field from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.676 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40119320 kB' 'MemAvailable: 44792888 kB' 'Buffers: 2696 kB' 'Cached: 14240672 kB' 'SwapCached: 0 kB' 'Active: 10304748 kB' 'Inactive: 4455504 kB' 'Active(anon): 9738296 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519788 kB' 'Mapped: 207260 kB' 'Shmem: 9221412 kB' 'KReclaimable: 297500 kB' 'Slab: 938144 kB' 'SReclaimable: 297500 kB' 'SUnreclaim: 640644 kB' 'KernelStack: 22144 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11087384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216604 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
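By this point the trace has pulled two of its three numbers out of these snapshots: anon (AnonHugePages) and surp (HugePages_Surp) both came back 0, and the dump above feeds the HugePages_Rsvd lookup that follows. The same three probes, sketched with awk standing in for the traced helper (the variable names anon/surp/resv come from the traced hugepages.sh):

#!/usr/bin/env bash
anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # 0 in this run
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)  # 0 in this run
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)  # 0 in this run
echo "anon=${anon} surp=${surp} resv=${resv}"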
00:03:26.677 15:41:24 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 read loop skips MemTotal through NFS_Unstable looking for HugePages_Rsvd; the captured log ends here, truncated mid-loop]
-r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.679 nr_hugepages=1024 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.679 resv_hugepages=0 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.679 surplus_hugepages=0 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.679 anon_hugepages=0 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.679 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40125260 kB' 'MemAvailable: 44798828 kB' 'Buffers: 2696 kB' 'Cached: 14240696 kB' 'SwapCached: 0 kB' 'Active: 10298808 kB' 'Inactive: 4455504 kB' 'Active(anon): 9732356 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514292 kB' 'Mapped: 206756 kB' 'Shmem: 9221436 kB' 'KReclaimable: 297500 kB' 'Slab: 938144 kB' 'SReclaimable: 297500 kB' 'SUnreclaim: 640644 kB' 'KernelStack: 22128 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11081288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 103488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3218804 kB' 'DirectMap2M: 19535872 kB' 'DirectMap1G: 46137344 kB'
00:03:26.679 .. 00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [xtrace condensed: same scan as before, every key from MemTotal onward tested against HugePages_Total with continue on each non-match]
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
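[Note: get_meminfo HugePages_Surp 0 reruns the same scan against NUMA node 0's sysfs copy of meminfo, whose lines carry a "Node 0" prefix that the script strips before parsing. A rough sketch of that source selection, under the assumption that sed/awk stand in for the script's actual mapfile plus pattern substitution:

    node=0
    mem_f=/proc/meminfo
    # Prefer the per-node view when it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines read "Node 0 HugePages_Surp: 0"; drop the prefix, then look up the key.
    sed -E 's/^Node [0-9]+ +//' "$mem_f" | awk -F': +' '$1 == "HugePages_Surp" {print $2}'
]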
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.681 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18913824 kB' 'MemUsed: 13725316 kB' 'SwapCached: 0 kB' 'Active: 6660012 kB' 'Inactive: 3367500 kB' 'Active(anon): 6370948 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3367500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9746956 kB' 'Mapped: 128140 kB' 'AnonPages: 283232 kB' 'Shmem: 6090392 kB' 'KernelStack: 11096 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173544 kB' 'Slab: 514228 kB' 'SReclaimable: 173544 kB' 'SUnreclaim: 340684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:26.681 .. 00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [xtrace condensed: node 0's keys, MemTotal through HugePages_Free, tested against HugePages_Surp with continue on each non-match]
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:26.683 node0=1024 expecting 1024
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:26.683
00:03:26.683 real 0m6.529s
00:03:26.683 user 0m2.245s
00:03:26.683 sys 0m4.227s
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:26.683 15:41:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.683 ************************************
00:03:26.683 END TEST no_shrink_alloc
00:03:26.683 ************************************
00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
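[Note: clear_hp, traced next, walks every NUMA node's hugepage pools and zeroes them so the next test starts from a clean slate. A minimal sketch of what those @39-@41 iterations do, assuming root and the standard sysfs layout:

    # Zero each pool, e.g. .../node0/hugepages/hugepages-2048kB/nr_hugepages
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # signal later setup.sh invocations that pools were cleared
]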
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:26.683 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:26.683 00:03:26.683 real 0m25.439s 00:03:26.683 user 0m8.325s 00:03:26.683 sys 0m15.401s 00:03:26.683 15:41:24 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.683 15:41:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.683 ************************************ 00:03:26.683 END TEST hugepages 00:03:26.683 ************************************ 00:03:26.683 15:41:25 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:26.683 15:41:25 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:26.683 15:41:25 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:26.683 15:41:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:26.683 ************************************ 00:03:26.683 START TEST driver 00:03:26.683 ************************************ 00:03:26.683 15:41:25 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:26.683 * Looking for test storage... 
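The hugepages pass above reduces to two shell idioms: pulling one field out of /proc/meminfo with a colon-aware read loop, and resetting preallocated hugepages through sysfs as clear_hp does. A minimal standalone sketch of both, assuming populated hugepage directories under /sys and using an illustrative helper name (get_meminfo_field is not the script's own function); the sysfs writes need root:

    #!/usr/bin/env bash
    # Print one /proc/meminfo value, e.g. "get_meminfo_field HugePages_Surp".
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # skip fields until the key matches
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # Reset preallocated hugepages on every NUMA node, as clear_hp does.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done

The IFS=': ' read -r var val _ trick splits a line like "HugePages_Surp: 0" into key and value in one pass, which is why every skipped field in the trace is followed by the same read.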
00:03:26.683 15:41:25 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:26.683 15:41:25 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:26.683 15:41:25 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:26.683 15:41:25 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:26.683 ************************************
00:03:26.683 START TEST driver
00:03:26.683 ************************************
00:03:26.683 15:41:25 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:26.683 * Looking for test storage...
00:03:26.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:26.684 15:41:25 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:26.684 15:41:25 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:26.684 15:41:25 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:31.942 15:41:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:31.942 15:41:29 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:31.942 15:41:29 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:31.942 15:41:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:31.942 ************************************
00:03:31.942 START TEST guess_driver
00:03:31.942 ************************************
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 ))
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:31.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:31.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:31.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:31.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:31.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:31.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:31.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:31.942 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:31.943 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:31.943 Looking for driver=vfio-pci
00:03:31.943 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:31.943 15:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:31.943 15:41:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.943 15:41:29 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[trace condensed: setup/driver.sh@57-61 repeats "read -r _ _ _ _ marker setup_driver / [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]]" for every device line printed by setup.sh config, from 15:41:32 through 15:41:34]
00:03:36.368 15:41:34 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:36.368 15:41:34 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:36.368 15:41:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:36.368 15:41:34 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:41.632 real 0m9.552s
00:03:41.632 user 0m2.347s
00:03:41.632 sys 0m4.807s
00:03:41.632 15:41:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:41.632 15:41:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:41.632 ************************************
00:03:41.632 END TEST guess_driver
00:03:41.632 ************************************
00:03:41.632 real 0m14.422s
00:03:41.632 user 0m3.718s
00:03:41.632 sys 0m7.529s
00:03:41.632 15:41:39 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:41.632 15:41:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:41.632 ************************************
00:03:41.632 END TEST driver
00:03:41.632 ************************************
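guess_driver lands on vfio-pci because the host exposes populated IOMMU groups ((( 176 > 0 )) above) and modprobe can resolve vfio_pci to concrete .ko objects. A condensed sketch of that decision; the real pick_driver also consults /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, and the fallback string below simply mirrors the "No valid driver found" sentinel tested at driver.sh@51:

    # Pick vfio-pci when the kernel can actually back it.
    pick_driver() {
        # On IOMMU-enabled hosts this directory is populated with group dirs.
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) &&
           modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_driver)
    [[ $driver == 'No valid driver found' ]] && exit 1
    echo "Looking for driver=$driver"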
00:03:41.632 15:41:39 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:41.632 15:41:39 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:41.632 15:41:39 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:41.632 15:41:39 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:41.632 ************************************
00:03:41.632 START TEST devices
00:03:41.632 ************************************
00:03:41.632 15:41:39 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:41.632 * Looking for test storage...
00:03:41.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:41.632 15:41:39 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:41.632 15:41:39 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:41.632 15:41:39 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:41.632 15:41:39 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:44.914 15:41:43 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:44.914 15:41:43 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:44.914 No valid GPT data, bailing
00:03:44.914 15:41:43 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:44.914 15:41:43 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:44.914 15:41:43 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:44.914 15:41:43 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:44.914 15:41:43 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:44.914 15:41:43 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size ))
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
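The namespace qualifies because it is unpartitioned (spdk-gpt.py bails, blkid reports no PTTYPE, so block_in_use returns 1) and it clears the 3 GiB floor by a wide margin (1600321314816 bytes against min_disk_size=3221225472). A standalone sketch of the same filter, with blockdev --getsize64 standing in for the sector arithmetic that sec_size_to_bytes derives from /sys/block (run as root):

    # Collect NVMe namespaces that are unpartitioned and at least 3 GiB.
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    blocks=()
    for dev in /dev/nvme*n1; do
        # A non-empty PTTYPE means the disk already carries a partition table.
        [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]] || continue
        size=$(blockdev --getsize64 "$dev")
        (( size >= min_disk_size )) && blocks+=("${dev##*/}")
    done
    printf 'candidate: %s\n' "${blocks[@]}"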
00:03:44.914 15:41:43 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:44.914 15:41:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:44.914 ************************************
00:03:44.914 START TEST nvme_mount
00:03:44.914 ************************************
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:44.914 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:46.286 Creating new GPT entries in memory.
00:03:46.286 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:46.286 other utilities.
00:03:46.286 15:41:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:46.286 15:41:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:46.286 15:41:44 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:46.286 15:41:44 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:46.286 15:41:44 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:47.221 Creating new GPT entries in memory.
00:03:47.221 The operation has completed successfully.
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3544426
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
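The sector arithmetic above is worth unpacking: size is 1 GiB in 512-byte sectors (1073741824 / 512 = 2097152), the partition starts at the conventional sector 2048, and end sector 2099199 is start + size - 1. A self-contained sketch of the partition, format, and mount sequence; it is destructive, and the device and mount point are illustrative. The flock wrapped around sgdisk in the real run appears to serialize partition-table writers against the udev helper; the commands themselves are unchanged:

    disk=/dev/nvme0n1          # illustrative target; this sketch wipes it
    mnt=/tmp/nvme_mount        # illustrative mount point
    size=$(( 1073741824 / 512 ))      # 1 GiB in 512-byte sectors = 2097152
    start=2048
    end=$(( start + size - 1 ))       # 2099199, as in the trace

    sgdisk "$disk" --zap-all                    # destroy any old GPT/MBR
    sgdisk "$disk" --new=1:"$start":"$end"      # partition 1, exactly 1 GiB
    mkfs.ext4 -qF "${disk}p1"                   # quiet, force
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"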
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.221 15:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[trace condensed: setup/devices.sh@60-62 skips the non-allow-listed functions 0000:00:04.7 down to 0000:00:04.0 and 0000:80:04.7 down to 0000:80:04.0 before reaching the target]
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:50.502 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:50.502 15:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:50.761 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:50.761 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:03:50.761 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:50.761 /dev/nvme0n1: calling ioctl to re-read partition table: Success
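verify is a thin read loop over the setup.sh config listing: the first field of each line is a PCI address, the trailing fields a status string; every device other than the allow-listed one is skipped, and found flips when the status names the expected mount. A sketch with the listing stubbed out (the two sample input lines are invented stand-ins shaped like the trace output, not real setup.sh output):

    target=0000:d8:00.0
    expect='nvme0n1:nvme0n1p1'
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$target" ]] || continue        # skip non-allow-listed devices
        [[ $status == *"$expect"* ]] && found=1
    done < <(printf '%s\n' \
        '0000:00:04.0 ioatdma - skipped' \
        '0000:d8:00.0 nvme nvme0 Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev')
    (( found == 1 )) || echo 'expected mount not found' >&2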
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.761 15:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[trace condensed: setup/devices.sh@60-62 again skips 0000:00:04.7 down to 0000:00:04.0 and 0000:80:04.7 down to 0000:80:04.0]
00:03:54.043 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:54.043 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:54.043 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:54.043 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:54.043 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:54.043 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:54.043 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:54.043 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' ''
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:03:54.300 15:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:54.301 15:41:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.301 15:41:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[trace condensed: setup/devices.sh@60-62 again skips 0000:00:04.7 down to 0000:00:04.0 and 0000:80:04.7 down to 0000:80:04.0]
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:57.586 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:57.586 real 0m12.308s
00:03:57.586 user 0m3.468s
00:03:57.586 sys 0m6.661s
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:57.586 15:41:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:57.586 ************************************
00:03:57.586 END TEST nvme_mount
00:03:57.586 ************************************
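Teardown in cleanup_nvme is signature-level rather than a full re-zero: unmount, then let wipefs erase only the magics it finds. The "53 ef" pair above is the little-endian ext4 superblock magic 0xEF53 at offset 0x438, and the "45 46 49 20 50 41 52 54" runs spell "EFI PART", the GPT header signature. A sketch of the same teardown with an illustrative mount point:

    mnt=/tmp/nvme_mount   # illustrative; the test uses its workspace path
    mountpoint -q "$mnt" && umount "$mnt"
    # wipefs prints one line per signature it erases, then the kernel
    # re-reads the (now empty) partition table.
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1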
00:03:57.586 15:41:55 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:57.586 15:41:55 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:57.586 15:41:55 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:57.586 15:41:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:57.586 ************************************
00:03:57.586 START TEST dm_mount
00:03:57.586 ************************************
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:57.586 15:41:55 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:58.554 Creating new GPT entries in memory.
00:03:58.554 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:58.554 other utilities.
00:03:58.554 15:41:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:58.554 15:41:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:58.554 15:41:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:58.554 15:41:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:58.554 15:41:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:59.490 Creating new GPT entries in memory.
00:03:59.490 The operation has completed successfully.
00:03:59.490 15:41:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:59.490 15:41:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:59.490 15:41:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:59.490 15:41:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:59.490 15:41:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:00.427 The operation has completed successfully.
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3548847
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:00.427 15:41:58 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
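The dm leg stacks a device-mapper node over the two freshly cut 1 GiB partitions, polls /dev/mapper for up to five rounds (the for t in {1..5} / break above), then confirms via /sys/class/block/*/holders that both partitions back dm-0. The trace never shows the table SPDK feeds to dmsetup create, so the linear concatenation below is an assumption chosen to match the partition sizes:

    # Assumed table: concatenate the two 1 GiB partitions (2097152 sectors each).
    # Table format: <logical start> <num sectors> linear <backing dev> <offset>
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' |
        dmsetup create nvme_dm_test

    for t in {1..5}; do                       # wait for udev to create the node
        [[ -e /dev/mapper/nvme_dm_test ]] && break
        sleep 1
    done
    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] || exit 1
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test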
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.686 15:41:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[trace condensed: setup/devices.sh@60-62 again skips 0000:00:04.7 down to 0000:00:04.0 and 0000:80:04.7 down to 0000:80:04.0, from 15:42:01 into 15:42:02]
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.994 15:42:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.522 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:06.780 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.780 15:42:05 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:07.037 00:04:07.037 real 0m9.509s 00:04:07.037 user 0m2.239s 00:04:07.037 sys 0m4.302s 00:04:07.037 15:42:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.037 15:42:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:07.037 ************************************ 00:04:07.037 END TEST dm_mount 00:04:07.037 ************************************ 00:04:07.037 15:42:05 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:07.037 15:42:05 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:07.037 15:42:05 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.037 15:42:05 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.037 
15:42:05 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:07.037 15:42:05 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.037 15:42:05 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.293 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:07.293 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:07.293 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.293 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.293 15:42:05 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:07.293 15:42:05 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.293 15:42:05 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.293 15:42:05 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.293 15:42:05 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:07.293 15:42:05 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.293 15:42:05 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:07.293 00:04:07.293 real 0m26.116s 00:04:07.293 user 0m7.166s 00:04:07.293 sys 0m13.699s 00:04:07.293 15:42:05 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.293 15:42:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:07.293 ************************************ 00:04:07.293 END TEST devices 00:04:07.293 ************************************ 00:04:07.293 00:04:07.293 real 1m29.741s 00:04:07.293 user 0m26.600s 00:04:07.293 sys 0m50.965s 00:04:07.293 15:42:05 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.293 15:42:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.293 ************************************ 00:04:07.293 END TEST setup.sh 00:04:07.293 ************************************ 00:04:07.293 15:42:05 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:10.577 Hugepages 00:04:10.577 node hugesize free / total 00:04:10.577 node0 1048576kB 0 / 0 00:04:10.577 node0 2048kB 2048 / 2048 00:04:10.577 node1 1048576kB 0 / 0 00:04:10.577 node1 2048kB 0 / 0 00:04:10.577 00:04:10.577 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.577 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:10.577 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:10.577 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:10.577 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:10.577 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:10.577 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:10.577 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:10.577 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:10.577 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:10.577 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:10.577 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:10.577 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:10.577 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:10.577 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:10.577 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:10.577 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:10.577 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:10.577 15:42:08 -- spdk/autotest.sh@130 -- # uname -s 00:04:10.577 
15:42:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:10.577 15:42:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:10.577 15:42:08 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.103 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.361 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:15.260 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.260 15:42:13 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:16.255 15:42:14 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:16.255 15:42:14 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:16.255 15:42:14 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.255 15:42:14 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:16.255 15:42:14 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:16.255 15:42:14 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:16.255 15:42:14 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.255 15:42:14 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:16.255 15:42:14 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:16.255 15:42:14 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:16.255 15:42:14 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:04:16.255 15:42:14 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.539 Waiting for block devices as requested 00:04:19.539 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.539 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.539 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.539 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.798 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:19.798 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.798 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.057 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:20.057 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:20.057 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:20.315 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:20.315 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:20.315 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:20.574 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:20.574 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.574 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:20.833 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:20.833 15:42:19 -- common/autotest_common.sh@1534 -- # for bdf in 
"${bdfs[@]}" 00:04:20.833 15:42:19 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:20.833 15:42:19 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:04:20.833 15:42:19 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:20.833 15:42:19 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:20.833 15:42:19 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:20.833 15:42:19 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:20.833 15:42:19 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:20.833 15:42:19 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:20.833 15:42:19 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:20.833 15:42:19 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:20.833 15:42:19 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:20.833 15:42:19 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:20.833 15:42:19 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:04:20.833 15:42:19 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:20.833 15:42:19 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:20.833 15:42:19 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:20.833 15:42:19 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:20.833 15:42:19 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:20.833 15:42:19 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:20.833 15:42:19 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:20.833 15:42:19 -- common/autotest_common.sh@1553 -- # continue 00:04:20.833 15:42:19 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:20.833 15:42:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.833 15:42:19 -- common/autotest_common.sh@10 -- # set +x 00:04:20.833 15:42:19 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:20.833 15:42:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:20.833 15:42:19 -- common/autotest_common.sh@10 -- # set +x 00:04:20.833 15:42:19 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.121 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.121 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.121 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.379 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.282 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.282 15:42:24 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:26.282 15:42:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.282 
15:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:26.282 15:42:24 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:26.282 15:42:24 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:26.282 15:42:24 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.282 15:42:24 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:26.282 15:42:24 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:26.282 15:42:24 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:26.282 15:42:24 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:26.282 15:42:24 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:26.282 15:42:24 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.282 15:42:24 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:26.282 15:42:24 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:26.282 15:42:24 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:26.282 15:42:24 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:04:26.282 15:42:24 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:26.282 15:42:24 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:26.282 15:42:24 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:26.282 15:42:24 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:26.282 15:42:24 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:26.282 15:42:24 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:04:26.282 15:42:24 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:04:26.282 15:42:24 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3559174 00:04:26.282 15:42:24 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.282 15:42:24 -- common/autotest_common.sh@1594 -- # waitforlisten 3559174 00:04:26.282 15:42:24 -- common/autotest_common.sh@827 -- # '[' -z 3559174 ']' 00:04:26.282 15:42:24 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.282 15:42:24 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:26.282 15:42:24 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.282 15:42:24 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:26.282 15:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:26.282 [2024-05-15 15:42:24.698154] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
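Here autotest.sh has collected the NVMe BDFs (scripts/gen_nvme.sh piped through jq), kept the ones whose sysfs device ID is 0x0a54, then launched spdk_tgt and blocked in waitforlisten until the RPC socket answers. A minimal sketch of that wait loop, under an assumed helper name (the real implementation in test/common/autotest_common.sh adds retry limits and richer diagnostics):

    # Sketch: poll until the target process is alive and its RPC socket responds.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1          # target died early
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                            # timed out
    }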
00:04:26.282 [2024-05-15 15:42:24.698216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559174 ] 00:04:26.282 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.282 [2024-05-15 15:42:24.769213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.282 [2024-05-15 15:42:24.840170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.218 15:42:25 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:27.218 15:42:25 -- common/autotest_common.sh@860 -- # return 0 00:04:27.218 15:42:25 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:27.218 15:42:25 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:27.218 15:42:25 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:30.503 nvme0n1 00:04:30.503 15:42:28 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:30.503 [2024-05-15 15:42:28.632184] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:30.503 request: 00:04:30.503 { 00:04:30.503 "nvme_ctrlr_name": "nvme0", 00:04:30.503 "password": "test", 00:04:30.503 "method": "bdev_nvme_opal_revert", 00:04:30.503 "req_id": 1 00:04:30.503 } 00:04:30.503 Got JSON-RPC error response 00:04:30.503 response: 00:04:30.503 { 00:04:30.503 "code": -32602, 00:04:30.503 "message": "Invalid parameters" 00:04:30.503 } 00:04:30.503 15:42:28 -- common/autotest_common.sh@1600 -- # true 00:04:30.503 15:42:28 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:30.503 15:42:28 -- common/autotest_common.sh@1604 -- # killprocess 3559174 00:04:30.503 15:42:28 -- common/autotest_common.sh@946 -- # '[' -z 3559174 ']' 00:04:30.503 15:42:28 -- common/autotest_common.sh@950 -- # kill -0 3559174 00:04:30.503 15:42:28 -- common/autotest_common.sh@951 -- # uname 00:04:30.503 15:42:28 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:30.503 15:42:28 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3559174 00:04:30.503 15:42:28 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:30.503 15:42:28 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:30.503 15:42:28 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3559174' 00:04:30.503 killing process with pid 3559174 00:04:30.503 15:42:28 -- common/autotest_common.sh@965 -- # kill 3559174 00:04:30.503 15:42:28 -- common/autotest_common.sh@970 -- # wait 3559174 00:04:32.405 15:42:30 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:32.405 15:42:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:32.405 15:42:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.405 15:42:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.405 15:42:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:32.405 15:42:30 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:32.405 15:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:32.405 15:42:30 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.405 15:42:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.405 15:42:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 
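The Opal revert path above attaches the controller over JSON-RPC and then fails cleanly because this drive reports no Opal support. Both calls can be replayed by hand against a running spdk_tgt (paths relative to the spdk checkout; the BDF matches this rig):

    # Attach 0000:d8:00.0 as controller "nvme0", then attempt the revert.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
    # On a non-Opal drive the second call returns the -32602 error logged above.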
00:04:32.405 15:42:30 -- common/autotest_common.sh@10 -- # set +x 00:04:32.663 ************************************ 00:04:32.664 START TEST env 00:04:32.664 ************************************ 00:04:32.664 15:42:30 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.664 * Looking for test storage... 00:04:32.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:32.664 15:42:31 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.664 15:42:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.664 15:42:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.664 15:42:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.664 ************************************ 00:04:32.664 START TEST env_memory 00:04:32.664 ************************************ 00:04:32.664 15:42:31 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.664 00:04:32.664 00:04:32.664 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.664 http://cunit.sourceforge.net/ 00:04:32.664 00:04:32.664 00:04:32.664 Suite: memory 00:04:32.664 Test: alloc and free memory map ...[2024-05-15 15:42:31.185316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.664 passed 00:04:32.664 Test: mem map translation ...[2024-05-15 15:42:31.203154] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.664 [2024-05-15 15:42:31.203169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.664 [2024-05-15 15:42:31.203206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.664 [2024-05-15 15:42:31.203214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.924 passed 00:04:32.924 Test: mem map registration ...[2024-05-15 15:42:31.238656] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:32.924 [2024-05-15 15:42:31.238673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:32.924 passed 00:04:32.924 Test: mem map adjacent registrations ...passed 00:04:32.924 00:04:32.924 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.924 suites 1 1 n/a 0 0 00:04:32.924 tests 4 4 4 0 0 00:04:32.924 asserts 152 152 152 0 n/a 00:04:32.924 00:04:32.924 Elapsed time = 0.131 seconds 00:04:32.924 00:04:32.924 real 0m0.145s 00:04:32.924 user 0m0.135s 00:04:32.924 sys 0m0.010s 00:04:32.924 15:42:31 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.924 15:42:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:32.924 ************************************ 00:04:32.924 END TEST 
env_memory 00:04:32.924 ************************************ 00:04:32.924 15:42:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.924 15:42:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.924 15:42:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.924 15:42:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.924 ************************************ 00:04:32.924 START TEST env_vtophys 00:04:32.924 ************************************ 00:04:32.924 15:42:31 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.924 EAL: lib.eal log level changed from notice to debug 00:04:32.924 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.924 EAL: Detected lcore 1 as core 1 on socket 0 00:04:32.924 EAL: Detected lcore 2 as core 2 on socket 0 00:04:32.924 EAL: Detected lcore 3 as core 3 on socket 0 00:04:32.924 EAL: Detected lcore 4 as core 4 on socket 0 00:04:32.924 EAL: Detected lcore 5 as core 5 on socket 0 00:04:32.924 EAL: Detected lcore 6 as core 6 on socket 0 00:04:32.924 EAL: Detected lcore 7 as core 8 on socket 0 00:04:32.924 EAL: Detected lcore 8 as core 9 on socket 0 00:04:32.924 EAL: Detected lcore 9 as core 10 on socket 0 00:04:32.924 EAL: Detected lcore 10 as core 11 on socket 0 00:04:32.924 EAL: Detected lcore 11 as core 12 on socket 0 00:04:32.924 EAL: Detected lcore 12 as core 13 on socket 0 00:04:32.924 EAL: Detected lcore 13 as core 14 on socket 0 00:04:32.924 EAL: Detected lcore 14 as core 16 on socket 0 00:04:32.924 EAL: Detected lcore 15 as core 17 on socket 0 00:04:32.924 EAL: Detected lcore 16 as core 18 on socket 0 00:04:32.924 EAL: Detected lcore 17 as core 19 on socket 0 00:04:32.924 EAL: Detected lcore 18 as core 20 on socket 0 00:04:32.924 EAL: Detected lcore 19 as core 21 on socket 0 00:04:32.924 EAL: Detected lcore 20 as core 22 on socket 0 00:04:32.924 EAL: Detected lcore 21 as core 24 on socket 0 00:04:32.924 EAL: Detected lcore 22 as core 25 on socket 0 00:04:32.924 EAL: Detected lcore 23 as core 26 on socket 0 00:04:32.924 EAL: Detected lcore 24 as core 27 on socket 0 00:04:32.924 EAL: Detected lcore 25 as core 28 on socket 0 00:04:32.924 EAL: Detected lcore 26 as core 29 on socket 0 00:04:32.924 EAL: Detected lcore 27 as core 30 on socket 0 00:04:32.924 EAL: Detected lcore 28 as core 0 on socket 1 00:04:32.924 EAL: Detected lcore 29 as core 1 on socket 1 00:04:32.924 EAL: Detected lcore 30 as core 2 on socket 1 00:04:32.924 EAL: Detected lcore 31 as core 3 on socket 1 00:04:32.924 EAL: Detected lcore 32 as core 4 on socket 1 00:04:32.924 EAL: Detected lcore 33 as core 5 on socket 1 00:04:32.924 EAL: Detected lcore 34 as core 6 on socket 1 00:04:32.924 EAL: Detected lcore 35 as core 8 on socket 1 00:04:32.924 EAL: Detected lcore 36 as core 9 on socket 1 00:04:32.924 EAL: Detected lcore 37 as core 10 on socket 1 00:04:32.924 EAL: Detected lcore 38 as core 11 on socket 1 00:04:32.924 EAL: Detected lcore 39 as core 12 on socket 1 00:04:32.924 EAL: Detected lcore 40 as core 13 on socket 1 00:04:32.924 EAL: Detected lcore 41 as core 14 on socket 1 00:04:32.924 EAL: Detected lcore 42 as core 16 on socket 1 00:04:32.924 EAL: Detected lcore 43 as core 17 on socket 1 00:04:32.924 EAL: Detected lcore 44 as core 18 on socket 1 00:04:32.924 EAL: Detected lcore 45 as core 19 on socket 1 00:04:32.924 EAL: Detected lcore 46 as core 20 on socket 1 00:04:32.924 EAL: 
Detected lcore 47 as core 21 on socket 1 00:04:32.924 EAL: Detected lcore 48 as core 22 on socket 1 00:04:32.924 EAL: Detected lcore 49 as core 24 on socket 1 00:04:32.924 EAL: Detected lcore 50 as core 25 on socket 1 00:04:32.924 EAL: Detected lcore 51 as core 26 on socket 1 00:04:32.924 EAL: Detected lcore 52 as core 27 on socket 1 00:04:32.924 EAL: Detected lcore 53 as core 28 on socket 1 00:04:32.924 EAL: Detected lcore 54 as core 29 on socket 1 00:04:32.924 EAL: Detected lcore 55 as core 30 on socket 1 00:04:32.924 EAL: Detected lcore 56 as core 0 on socket 0 00:04:32.924 EAL: Detected lcore 57 as core 1 on socket 0 00:04:32.924 EAL: Detected lcore 58 as core 2 on socket 0 00:04:32.924 EAL: Detected lcore 59 as core 3 on socket 0 00:04:32.924 EAL: Detected lcore 60 as core 4 on socket 0 00:04:32.924 EAL: Detected lcore 61 as core 5 on socket 0 00:04:32.924 EAL: Detected lcore 62 as core 6 on socket 0 00:04:32.924 EAL: Detected lcore 63 as core 8 on socket 0 00:04:32.924 EAL: Detected lcore 64 as core 9 on socket 0 00:04:32.924 EAL: Detected lcore 65 as core 10 on socket 0 00:04:32.924 EAL: Detected lcore 66 as core 11 on socket 0 00:04:32.924 EAL: Detected lcore 67 as core 12 on socket 0 00:04:32.924 EAL: Detected lcore 68 as core 13 on socket 0 00:04:32.924 EAL: Detected lcore 69 as core 14 on socket 0 00:04:32.924 EAL: Detected lcore 70 as core 16 on socket 0 00:04:32.924 EAL: Detected lcore 71 as core 17 on socket 0 00:04:32.924 EAL: Detected lcore 72 as core 18 on socket 0 00:04:32.924 EAL: Detected lcore 73 as core 19 on socket 0 00:04:32.924 EAL: Detected lcore 74 as core 20 on socket 0 00:04:32.924 EAL: Detected lcore 75 as core 21 on socket 0 00:04:32.924 EAL: Detected lcore 76 as core 22 on socket 0 00:04:32.924 EAL: Detected lcore 77 as core 24 on socket 0 00:04:32.924 EAL: Detected lcore 78 as core 25 on socket 0 00:04:32.924 EAL: Detected lcore 79 as core 26 on socket 0 00:04:32.924 EAL: Detected lcore 80 as core 27 on socket 0 00:04:32.924 EAL: Detected lcore 81 as core 28 on socket 0 00:04:32.924 EAL: Detected lcore 82 as core 29 on socket 0 00:04:32.924 EAL: Detected lcore 83 as core 30 on socket 0 00:04:32.924 EAL: Detected lcore 84 as core 0 on socket 1 00:04:32.924 EAL: Detected lcore 85 as core 1 on socket 1 00:04:32.924 EAL: Detected lcore 86 as core 2 on socket 1 00:04:32.924 EAL: Detected lcore 87 as core 3 on socket 1 00:04:32.924 EAL: Detected lcore 88 as core 4 on socket 1 00:04:32.924 EAL: Detected lcore 89 as core 5 on socket 1 00:04:32.924 EAL: Detected lcore 90 as core 6 on socket 1 00:04:32.924 EAL: Detected lcore 91 as core 8 on socket 1 00:04:32.924 EAL: Detected lcore 92 as core 9 on socket 1 00:04:32.924 EAL: Detected lcore 93 as core 10 on socket 1 00:04:32.924 EAL: Detected lcore 94 as core 11 on socket 1 00:04:32.924 EAL: Detected lcore 95 as core 12 on socket 1 00:04:32.924 EAL: Detected lcore 96 as core 13 on socket 1 00:04:32.924 EAL: Detected lcore 97 as core 14 on socket 1 00:04:32.924 EAL: Detected lcore 98 as core 16 on socket 1 00:04:32.924 EAL: Detected lcore 99 as core 17 on socket 1 00:04:32.924 EAL: Detected lcore 100 as core 18 on socket 1 00:04:32.924 EAL: Detected lcore 101 as core 19 on socket 1 00:04:32.924 EAL: Detected lcore 102 as core 20 on socket 1 00:04:32.924 EAL: Detected lcore 103 as core 21 on socket 1 00:04:32.924 EAL: Detected lcore 104 as core 22 on socket 1 00:04:32.924 EAL: Detected lcore 105 as core 24 on socket 1 00:04:32.924 EAL: Detected lcore 106 as core 25 on socket 1 00:04:32.924 EAL: Detected lcore 107 as 
core 26 on socket 1 00:04:32.924 EAL: Detected lcore 108 as core 27 on socket 1 00:04:32.924 EAL: Detected lcore 109 as core 28 on socket 1 00:04:32.924 EAL: Detected lcore 110 as core 29 on socket 1 00:04:32.924 EAL: Detected lcore 111 as core 30 on socket 1 00:04:32.925 EAL: Maximum logical cores by configuration: 128 00:04:32.925 EAL: Detected CPU lcores: 112 00:04:32.925 EAL: Detected NUMA nodes: 2 00:04:32.925 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:32.925 EAL: Detected shared linkage of DPDK 00:04:32.925 EAL: No shared files mode enabled, IPC will be disabled 00:04:32.925 EAL: Bus pci wants IOVA as 'DC' 00:04:32.925 EAL: Buses did not request a specific IOVA mode. 00:04:32.925 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:32.925 EAL: Selected IOVA mode 'VA' 00:04:32.925 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.925 EAL: Probing VFIO support... 00:04:32.925 EAL: IOMMU type 1 (Type 1) is supported 00:04:32.925 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:32.925 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:32.925 EAL: VFIO support initialized 00:04:32.925 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.925 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.925 EAL: Setting up physically contiguous memory... 00:04:32.925 EAL: Setting maximum number of open files to 524288 00:04:32.925 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.925 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:32.925 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.925 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.925 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.925 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.925 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.925 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.925 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.925 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.925 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.925 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.925 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.925 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.925 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.925 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.925 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.925 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.925 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.925 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.925 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.925 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.925 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.925 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.925 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.925 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.925 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.925 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:32.925 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.925 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:32.925 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.925 EAL: Ask 
a virtual area of 0x400000000 bytes 00:04:32.925 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:32.925 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:32.925 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.925 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:32.925 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.925 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.925 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:32.925 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:32.925 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.925 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:32.925 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.925 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.925 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:32.925 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:32.925 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.925 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:32.925 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.925 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.925 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:32.925 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:32.925 EAL: Hugepages will be freed exactly as allocated. 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: TSC frequency is ~2500000 KHz 00:04:32.925 EAL: Main lcore 0 is ready (tid=7f20f2feba00;cpuset=[0]) 00:04:32.925 EAL: Trying to obtain current memory policy. 00:04:32.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.925 EAL: Restoring previous memory policy: 0 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:32.925 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.925 00:04:32.925 00:04:32.925 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.925 http://cunit.sourceforge.net/ 00:04:32.925 00:04:32.925 00:04:32.925 Suite: components_suite 00:04:32.925 Test: vtophys_malloc_test ...passed 00:04:32.925 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:32.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.925 EAL: Restoring previous memory policy: 4 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.925 EAL: Trying to obtain current memory policy. 
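Each "Ask a virtual area of 0x400000000 bytes" above reserves one memseg list as pure virtual address space: 8192 segments of 2 MiB hugepages, with the small 0x61000-byte areas holding the list headers; nothing is backed by RAM until an allocation triggers the mem event callback. The arithmetic checks out:

    # 8192 segments x 2 MiB hugepages per memseg list:
    printf '0x%x bytes\n' $(( 8192 * 2097152 ))   # -> 0x400000000, as requested
    # Four lists per socket, two sockets: reserved VA only, not committed RAM.
    echo $(( 4 * 2 * 8192 * 2 )) MiB              # -> 131072 MiB (128 GiB) of VA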
00:04:32.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.925 EAL: Restoring previous memory policy: 4 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.925 EAL: Trying to obtain current memory policy. 00:04:32.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.925 EAL: Restoring previous memory policy: 4 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.925 EAL: Trying to obtain current memory policy. 00:04:32.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.925 EAL: Restoring previous memory policy: 4 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.925 EAL: Trying to obtain current memory policy. 00:04:32.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.925 EAL: Restoring previous memory policy: 4 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.925 EAL: Trying to obtain current memory policy. 00:04:32.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.925 EAL: Restoring previous memory policy: 4 00:04:32.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.925 EAL: request: mp_malloc_sync 00:04:32.925 EAL: No shared files mode enabled, IPC is disabled 00:04:32.925 EAL: Heap on socket 0 was expanded by 66MB 00:04:33.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.185 EAL: request: mp_malloc_sync 00:04:33.185 EAL: No shared files mode enabled, IPC is disabled 00:04:33.185 EAL: Heap on socket 0 was shrunk by 66MB 00:04:33.185 EAL: Trying to obtain current memory policy. 
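The expand/shrink sizes in this suite follow a clean pattern: each round allocates a doubling power-of-two buffer, and the heap grows by that size plus one extra 2 MiB hugepage (plausibly the malloc element headers spilling into an additional page). A one-liner reproduces the whole sequence:

    # Heap growth per round, in MiB: 2^k plus one extra 2 MiB hugepage.
    for k in $(seq 1 10); do printf '%d ' $(( (1 << k) + 2 )); done; echo
    # -> 4 6 10 18 34 66 130 258 514 1026, matching the sizes logged through this suite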
00:04:33.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.185 EAL: Restoring previous memory policy: 4 00:04:33.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.185 EAL: request: mp_malloc_sync 00:04:33.185 EAL: No shared files mode enabled, IPC is disabled 00:04:33.185 EAL: Heap on socket 0 was expanded by 130MB 00:04:33.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.185 EAL: request: mp_malloc_sync 00:04:33.185 EAL: No shared files mode enabled, IPC is disabled 00:04:33.185 EAL: Heap on socket 0 was shrunk by 130MB 00:04:33.185 EAL: Trying to obtain current memory policy. 00:04:33.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.185 EAL: Restoring previous memory policy: 4 00:04:33.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.185 EAL: request: mp_malloc_sync 00:04:33.185 EAL: No shared files mode enabled, IPC is disabled 00:04:33.185 EAL: Heap on socket 0 was expanded by 258MB 00:04:33.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.185 EAL: request: mp_malloc_sync 00:04:33.185 EAL: No shared files mode enabled, IPC is disabled 00:04:33.185 EAL: Heap on socket 0 was shrunk by 258MB 00:04:33.185 EAL: Trying to obtain current memory policy. 00:04:33.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.444 EAL: Restoring previous memory policy: 4 00:04:33.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.444 EAL: request: mp_malloc_sync 00:04:33.444 EAL: No shared files mode enabled, IPC is disabled 00:04:33.444 EAL: Heap on socket 0 was expanded by 514MB 00:04:33.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.444 EAL: request: mp_malloc_sync 00:04:33.444 EAL: No shared files mode enabled, IPC is disabled 00:04:33.444 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.444 EAL: Trying to obtain current memory policy. 
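Around every round EAL saves the caller's NUMA policy, sets MPOL_PREFERRED toward socket 0, and afterwards logs "Restoring previous memory policy: 4"; that 4 is MPOL_LOCAL in the kernel's mempolicy numbering. Where the libnuma development headers are installed, the constants can be confirmed directly:

    # 0=MPOL_DEFAULT 1=MPOL_PREFERRED 2=MPOL_BIND 3=MPOL_INTERLEAVE 4=MPOL_LOCAL
    grep -n 'MPOL_' /usr/include/numaif.h    # requires libnuma development headers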
00:04:33.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.702 EAL: Restoring previous memory policy: 4 00:04:33.702 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.702 EAL: request: mp_malloc_sync 00:04:33.702 EAL: No shared files mode enabled, IPC is disabled 00:04:33.702 EAL: Heap on socket 0 was expanded by 1026MB 00:04:33.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.961 EAL: request: mp_malloc_sync 00:04:33.961 EAL: No shared files mode enabled, IPC is disabled 00:04:33.961 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.961 passed 00:04:33.961 00:04:33.961 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.961 suites 1 1 n/a 0 0 00:04:33.961 tests 2 2 2 0 0 00:04:33.961 asserts 497 497 497 0 n/a 00:04:33.961 00:04:33.961 Elapsed time = 0.961 seconds 00:04:33.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.961 EAL: request: mp_malloc_sync 00:04:33.961 EAL: No shared files mode enabled, IPC is disabled 00:04:33.961 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.961 EAL: No shared files mode enabled, IPC is disabled 00:04:33.961 EAL: No shared files mode enabled, IPC is disabled 00:04:33.961 EAL: No shared files mode enabled, IPC is disabled 00:04:33.961 00:04:33.961 real 0m1.088s 00:04:33.961 user 0m0.637s 00:04:33.961 sys 0m0.426s 00:04:33.961 15:42:32 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.961 15:42:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.961 ************************************ 00:04:33.961 END TEST env_vtophys 00:04:33.961 ************************************ 00:04:33.961 15:42:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.962 15:42:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.962 15:42:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.962 15:42:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.220 ************************************ 00:04:34.220 START TEST env_pci 00:04:34.220 ************************************ 00:04:34.220 15:42:32 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:34.220 00:04:34.220 00:04:34.220 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.220 http://cunit.sourceforge.net/ 00:04:34.220 00:04:34.220 00:04:34.220 Suite: pci 00:04:34.220 Test: pci_hook ...[2024-05-15 15:42:32.544351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3560708 has claimed it 00:04:34.220 EAL: Cannot find device (10000:00:01.0) 00:04:34.220 EAL: Failed to attach device on primary process 00:04:34.220 passed 00:04:34.220 00:04:34.220 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.220 suites 1 1 n/a 0 0 00:04:34.220 tests 1 1 1 0 0 00:04:34.221 asserts 25 25 25 0 n/a 00:04:34.221 00:04:34.221 Elapsed time = 0.033 seconds 00:04:34.221 00:04:34.221 real 0m0.055s 00:04:34.221 user 0m0.013s 00:04:34.221 sys 0m0.041s 00:04:34.221 15:42:32 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.221 15:42:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:34.221 ************************************ 00:04:34.221 END TEST env_pci 00:04:34.221 ************************************ 00:04:34.221 15:42:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:34.221 
15:42:32 env -- env/env.sh@15 -- # uname 00:04:34.221 15:42:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:34.221 15:42:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:34.221 15:42:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.221 15:42:32 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:34.221 15:42:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.221 15:42:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.221 ************************************ 00:04:34.221 START TEST env_dpdk_post_init 00:04:34.221 ************************************ 00:04:34.221 15:42:32 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.221 EAL: Detected CPU lcores: 112 00:04:34.221 EAL: Detected NUMA nodes: 2 00:04:34.221 EAL: Detected shared linkage of DPDK 00:04:34.221 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.221 EAL: Selected IOVA mode 'VA' 00:04:34.221 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.221 EAL: VFIO support initialized 00:04:34.221 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.479 EAL: Using IOMMU type 1 (Type 1) 00:04:34.479 EAL: Ignore mapping IO port bar(1) 00:04:34.479 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:34.479 EAL: Ignore mapping IO port bar(1) 00:04:34.479 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:34.479 EAL: Ignore mapping IO port bar(1) 00:04:34.479 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:34.479 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:34.480 EAL: Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:34.480 EAL: 
Ignore mapping IO port bar(1) 00:04:34.480 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:35.456 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:38.731 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:38.731 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:38.990 Starting DPDK initialization... 00:04:38.990 Starting SPDK post initialization... 00:04:38.990 SPDK NVMe probe 00:04:38.990 Attaching to 0000:d8:00.0 00:04:38.990 Attached to 0000:d8:00.0 00:04:38.990 Cleaning up... 00:04:38.990 00:04:38.990 real 0m4.853s 00:04:38.990 user 0m3.586s 00:04:38.990 sys 0m0.322s 00:04:38.990 15:42:37 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.990 15:42:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.990 ************************************ 00:04:38.990 END TEST env_dpdk_post_init 00:04:38.990 ************************************ 00:04:39.248 15:42:37 env -- env/env.sh@26 -- # uname 00:04:39.248 15:42:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:39.248 15:42:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.248 15:42:37 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.248 15:42:37 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.248 15:42:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.248 ************************************ 00:04:39.248 START TEST env_mem_callbacks 00:04:39.248 ************************************ 00:04:39.248 15:42:37 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.248 EAL: Detected CPU lcores: 112 00:04:39.248 EAL: Detected NUMA nodes: 2 00:04:39.248 EAL: Detected shared linkage of DPDK 00:04:39.248 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.248 EAL: Selected IOVA mode 'VA' 00:04:39.248 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.248 EAL: VFIO support initialized 00:04:39.248 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.248 00:04:39.248 00:04:39.248 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.248 http://cunit.sourceforge.net/ 00:04:39.248 00:04:39.248 00:04:39.248 Suite: memory 00:04:39.248 Test: test ... 
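The trace that follows is the mem_callbacks unit test: each register/unregister line is the test's mem event callback observing the DPDK heap grow and shrink underneath plain malloc()/free() calls of varying sizes. The binary can be run standalone once hugepages are set up (path taken from the run above; root is typically needed for VFIO access):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/env/mem_callbacks/mem_callbacks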
00:04:39.248 register 0x200000200000 2097152 00:04:39.248 malloc 3145728 00:04:39.248 register 0x200000400000 4194304 00:04:39.248 buf 0x200000500000 len 3145728 PASSED 00:04:39.248 malloc 64 00:04:39.248 buf 0x2000004fff40 len 64 PASSED 00:04:39.248 malloc 4194304 00:04:39.248 register 0x200000800000 6291456 00:04:39.248 buf 0x200000a00000 len 4194304 PASSED 00:04:39.248 free 0x200000500000 3145728 00:04:39.248 free 0x2000004fff40 64 00:04:39.248 unregister 0x200000400000 4194304 PASSED 00:04:39.248 free 0x200000a00000 4194304 00:04:39.248 unregister 0x200000800000 6291456 PASSED 00:04:39.248 malloc 8388608 00:04:39.248 register 0x200000400000 10485760 00:04:39.248 buf 0x200000600000 len 8388608 PASSED 00:04:39.248 free 0x200000600000 8388608 00:04:39.248 unregister 0x200000400000 10485760 PASSED 00:04:39.248 passed 00:04:39.248 00:04:39.248 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.248 suites 1 1 n/a 0 0 00:04:39.248 tests 1 1 1 0 0 00:04:39.248 asserts 15 15 15 0 n/a 00:04:39.248 00:04:39.248 Elapsed time = 0.005 seconds 00:04:39.248 00:04:39.248 real 0m0.064s 00:04:39.248 user 0m0.026s 00:04:39.248 sys 0m0.037s 00:04:39.248 15:42:37 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.248 15:42:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:39.248 ************************************ 00:04:39.248 END TEST env_mem_callbacks 00:04:39.248 ************************************ 00:04:39.248 00:04:39.248 real 0m6.720s 00:04:39.248 user 0m4.572s 00:04:39.248 sys 0m1.195s 00:04:39.248 15:42:37 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.248 15:42:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.248 ************************************ 00:04:39.248 END TEST env 00:04:39.248 ************************************ 00:04:39.248 15:42:37 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.248 15:42:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.248 15:42:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.248 15:42:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.248 ************************************ 00:04:39.248 START TEST rpc 00:04:39.248 ************************************ 00:04:39.248 15:42:37 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.506 * Looking for test storage... 00:04:39.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.506 15:42:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3561644 00:04:39.506 15:42:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.506 15:42:37 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:39.506 15:42:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3561644 00:04:39.506 15:42:37 rpc -- common/autotest_common.sh@827 -- # '[' -z 3561644 ']' 00:04:39.506 15:42:37 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.506 15:42:37 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.506 15:42:37 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
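rpc.sh starts the target with '-e bdev', enabling the bdev tracepoint group; the notice just below names the matching spdk_trace invocation for this pid. The same pattern works interactively:

    # Run the target with bdev tracepoints, then snapshot the trace buffer.
    build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # ... drive some RPCs here ...
    build/bin/spdk_trace -s spdk_tgt -p "$tgt_pid"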
00:04:39.506 15:42:37 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.506 15:42:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.506 [2024-05-15 15:42:37.960786] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:04:39.506 [2024-05-15 15:42:37.960833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3561644 ] 00:04:39.506 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.506 [2024-05-15 15:42:38.030653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.764 [2024-05-15 15:42:38.104572] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.764 [2024-05-15 15:42:38.104607] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3561644' to capture a snapshot of events at runtime. 00:04:39.764 [2024-05-15 15:42:38.104616] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:39.764 [2024-05-15 15:42:38.104625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:39.764 [2024-05-15 15:42:38.104632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3561644 for offline analysis/debug. 00:04:39.764 [2024-05-15 15:42:38.104653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.329 15:42:38 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:40.329 15:42:38 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:40.329 15:42:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.329 15:42:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.329 15:42:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:40.329 15:42:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:40.329 15:42:38 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.329 15:42:38 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.329 15:42:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.329 ************************************ 00:04:40.329 START TEST rpc_integrity 00:04:40.329 ************************************ 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:40.329 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.329 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.329 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.329 15:42:38 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.329 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.329 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:40.329 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.329 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.329 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.329 { 00:04:40.329 "name": "Malloc0", 00:04:40.329 "aliases": [ 00:04:40.329 "61367170-11cd-4afa-9f5f-895603e0ced1" 00:04:40.329 ], 00:04:40.329 "product_name": "Malloc disk", 00:04:40.329 "block_size": 512, 00:04:40.329 "num_blocks": 16384, 00:04:40.329 "uuid": "61367170-11cd-4afa-9f5f-895603e0ced1", 00:04:40.329 "assigned_rate_limits": { 00:04:40.329 "rw_ios_per_sec": 0, 00:04:40.329 "rw_mbytes_per_sec": 0, 00:04:40.329 "r_mbytes_per_sec": 0, 00:04:40.329 "w_mbytes_per_sec": 0 00:04:40.329 }, 00:04:40.329 "claimed": false, 00:04:40.329 "zoned": false, 00:04:40.329 "supported_io_types": { 00:04:40.329 "read": true, 00:04:40.329 "write": true, 00:04:40.329 "unmap": true, 00:04:40.329 "write_zeroes": true, 00:04:40.329 "flush": true, 00:04:40.329 "reset": true, 00:04:40.329 "compare": false, 00:04:40.329 "compare_and_write": false, 00:04:40.329 "abort": true, 00:04:40.329 "nvme_admin": false, 00:04:40.329 "nvme_io": false 00:04:40.329 }, 00:04:40.329 "memory_domains": [ 00:04:40.329 { 00:04:40.329 "dma_device_id": "system", 00:04:40.329 "dma_device_type": 1 00:04:40.329 }, 00:04:40.329 { 00:04:40.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.329 "dma_device_type": 2 00:04:40.329 } 00:04:40.329 ], 00:04:40.329 "driver_specific": {} 00:04:40.329 } 00:04:40.329 ]' 00:04:40.329 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.588 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.588 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:40.588 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.588 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.588 [2024-05-15 15:42:38.928245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:40.588 [2024-05-15 15:42:38.928273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.588 [2024-05-15 15:42:38.928288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20cd190 00:04:40.588 [2024-05-15 15:42:38.928296] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.588 [2024-05-15 15:42:38.929387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.588 [2024-05-15 15:42:38.929409] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.588 Passthru0 00:04:40.588 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.588 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:40.588 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.588 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.588 15:42:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.588 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.588 { 00:04:40.588 "name": "Malloc0", 00:04:40.588 "aliases": [ 00:04:40.588 "61367170-11cd-4afa-9f5f-895603e0ced1" 00:04:40.588 ], 00:04:40.588 "product_name": "Malloc disk", 00:04:40.588 "block_size": 512, 00:04:40.588 "num_blocks": 16384, 00:04:40.588 "uuid": "61367170-11cd-4afa-9f5f-895603e0ced1", 00:04:40.588 "assigned_rate_limits": { 00:04:40.588 "rw_ios_per_sec": 0, 00:04:40.588 "rw_mbytes_per_sec": 0, 00:04:40.588 "r_mbytes_per_sec": 0, 00:04:40.588 "w_mbytes_per_sec": 0 00:04:40.588 }, 00:04:40.588 "claimed": true, 00:04:40.588 "claim_type": "exclusive_write", 00:04:40.588 "zoned": false, 00:04:40.588 "supported_io_types": { 00:04:40.588 "read": true, 00:04:40.588 "write": true, 00:04:40.588 "unmap": true, 00:04:40.588 "write_zeroes": true, 00:04:40.588 "flush": true, 00:04:40.588 "reset": true, 00:04:40.588 "compare": false, 00:04:40.588 "compare_and_write": false, 00:04:40.588 "abort": true, 00:04:40.588 "nvme_admin": false, 00:04:40.588 "nvme_io": false 00:04:40.588 }, 00:04:40.588 "memory_domains": [ 00:04:40.588 { 00:04:40.588 "dma_device_id": "system", 00:04:40.588 "dma_device_type": 1 00:04:40.588 }, 00:04:40.588 { 00:04:40.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.588 "dma_device_type": 2 00:04:40.588 } 00:04:40.588 ], 00:04:40.588 "driver_specific": {} 00:04:40.588 }, 00:04:40.588 { 00:04:40.588 "name": "Passthru0", 00:04:40.588 "aliases": [ 00:04:40.588 "f76b6fc4-a713-5613-a072-90ca99320346" 00:04:40.588 ], 00:04:40.588 "product_name": "passthru", 00:04:40.588 "block_size": 512, 00:04:40.588 "num_blocks": 16384, 00:04:40.588 "uuid": "f76b6fc4-a713-5613-a072-90ca99320346", 00:04:40.588 "assigned_rate_limits": { 00:04:40.588 "rw_ios_per_sec": 0, 00:04:40.588 "rw_mbytes_per_sec": 0, 00:04:40.588 "r_mbytes_per_sec": 0, 00:04:40.588 "w_mbytes_per_sec": 0 00:04:40.588 }, 00:04:40.588 "claimed": false, 00:04:40.588 "zoned": false, 00:04:40.588 "supported_io_types": { 00:04:40.588 "read": true, 00:04:40.588 "write": true, 00:04:40.588 "unmap": true, 00:04:40.588 "write_zeroes": true, 00:04:40.588 "flush": true, 00:04:40.588 "reset": true, 00:04:40.588 "compare": false, 00:04:40.588 "compare_and_write": false, 00:04:40.588 "abort": true, 00:04:40.588 "nvme_admin": false, 00:04:40.588 "nvme_io": false 00:04:40.588 }, 00:04:40.588 "memory_domains": [ 00:04:40.588 { 00:04:40.588 "dma_device_id": "system", 00:04:40.588 "dma_device_type": 1 00:04:40.588 }, 00:04:40.588 { 00:04:40.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.588 "dma_device_type": 2 00:04:40.588 } 00:04:40.588 ], 00:04:40.588 "driver_specific": { 00:04:40.588 "passthru": { 00:04:40.588 "name": "Passthru0", 00:04:40.588 "base_bdev_name": "Malloc0" 00:04:40.588 } 00:04:40.588 } 00:04:40.588 } 00:04:40.588 ]' 00:04:40.588 15:42:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.588 15:42:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.588 15:42:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.588 
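The two-entry dump above is the heart of rpc_integrity: a malloc bdev is wrapped by a passthru bdev, the list length is checked at each step, and the stack is torn down in reverse claim order. The same four RPCs, issued here with the standalone rpc.py client instead of the suite's rpc_cmd wrapper (the client choice is an assumption; the arguments are the ones visible in the trace):

  ./scripts/rpc.py bdev_malloc_create -b Malloc0 8 512      # 8 MiB malloc bdev, 512 B blocks
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0           # tear down in reverse order
  ./scripts/rpc.py bdev_malloc_delete Malloc0
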
15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.588 15:42:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.588 15:42:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.588 15:42:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.588 15:42:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.588 15:42:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.588 00:04:40.588 real 0m0.292s 00:04:40.588 user 0m0.175s 00:04:40.588 sys 0m0.056s 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.588 15:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.588 ************************************ 00:04:40.588 END TEST rpc_integrity 00:04:40.588 ************************************ 00:04:40.588 15:42:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.588 15:42:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.588 15:42:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.588 15:42:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.846 ************************************ 00:04:40.846 START TEST rpc_plugins 00:04:40.846 ************************************ 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.846 { 00:04:40.846 "name": "Malloc1", 00:04:40.846 "aliases": [ 00:04:40.846 "3b52bb82-14e0-4295-b5e1-bba1a6f0e2bf" 00:04:40.846 ], 00:04:40.846 "product_name": "Malloc disk", 00:04:40.846 "block_size": 4096, 00:04:40.846 "num_blocks": 256, 00:04:40.846 "uuid": "3b52bb82-14e0-4295-b5e1-bba1a6f0e2bf", 00:04:40.846 "assigned_rate_limits": { 00:04:40.846 "rw_ios_per_sec": 0, 00:04:40.846 "rw_mbytes_per_sec": 0, 00:04:40.846 "r_mbytes_per_sec": 0, 00:04:40.846 "w_mbytes_per_sec": 0 00:04:40.846 }, 00:04:40.846 "claimed": false, 00:04:40.846 "zoned": false, 00:04:40.846 "supported_io_types": { 00:04:40.846 "read": true, 00:04:40.846 "write": true, 00:04:40.846 "unmap": true, 00:04:40.846 "write_zeroes": true, 00:04:40.846 
"flush": true, 00:04:40.846 "reset": true, 00:04:40.846 "compare": false, 00:04:40.846 "compare_and_write": false, 00:04:40.846 "abort": true, 00:04:40.846 "nvme_admin": false, 00:04:40.846 "nvme_io": false 00:04:40.846 }, 00:04:40.846 "memory_domains": [ 00:04:40.846 { 00:04:40.846 "dma_device_id": "system", 00:04:40.846 "dma_device_type": 1 00:04:40.846 }, 00:04:40.846 { 00:04:40.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.846 "dma_device_type": 2 00:04:40.846 } 00:04:40.846 ], 00:04:40.846 "driver_specific": {} 00:04:40.846 } 00:04:40.846 ]' 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.846 15:42:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.846 00:04:40.846 real 0m0.145s 00:04:40.846 user 0m0.089s 00:04:40.846 sys 0m0.022s 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.846 15:42:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.846 ************************************ 00:04:40.846 END TEST rpc_plugins 00:04:40.846 ************************************ 00:04:40.846 15:42:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.846 15:42:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.846 15:42:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.846 15:42:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.846 ************************************ 00:04:40.846 START TEST rpc_trace_cmd_test 00:04:40.846 ************************************ 00:04:40.846 15:42:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:40.846 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.846 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.846 15:42:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.846 15:42:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:41.104 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3561644", 00:04:41.104 "tpoint_group_mask": "0x8", 00:04:41.104 "iscsi_conn": { 00:04:41.104 "mask": "0x2", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "scsi": { 00:04:41.104 "mask": "0x4", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "bdev": { 00:04:41.104 "mask": "0x8", 00:04:41.104 "tpoint_mask": 
"0xffffffffffffffff" 00:04:41.104 }, 00:04:41.104 "nvmf_rdma": { 00:04:41.104 "mask": "0x10", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "nvmf_tcp": { 00:04:41.104 "mask": "0x20", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "ftl": { 00:04:41.104 "mask": "0x40", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "blobfs": { 00:04:41.104 "mask": "0x80", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "dsa": { 00:04:41.104 "mask": "0x200", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "thread": { 00:04:41.104 "mask": "0x400", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "nvme_pcie": { 00:04:41.104 "mask": "0x800", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "iaa": { 00:04:41.104 "mask": "0x1000", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "nvme_tcp": { 00:04:41.104 "mask": "0x2000", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "bdev_nvme": { 00:04:41.104 "mask": "0x4000", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 }, 00:04:41.104 "sock": { 00:04:41.104 "mask": "0x8000", 00:04:41.104 "tpoint_mask": "0x0" 00:04:41.104 } 00:04:41.104 }' 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:41.104 00:04:41.104 real 0m0.232s 00:04:41.104 user 0m0.186s 00:04:41.104 sys 0m0.038s 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.104 15:42:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.104 ************************************ 00:04:41.104 END TEST rpc_trace_cmd_test 00:04:41.104 ************************************ 00:04:41.362 15:42:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:41.362 15:42:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.362 15:42:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:41.362 15:42:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.362 15:42:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.362 15:42:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.362 ************************************ 00:04:41.362 START TEST rpc_daemon_integrity 00:04:41.362 ************************************ 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.362 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.363 { 00:04:41.363 "name": "Malloc2", 00:04:41.363 "aliases": [ 00:04:41.363 "b3a8e69e-e744-48eb-9a99-535201d9ae7d" 00:04:41.363 ], 00:04:41.363 "product_name": "Malloc disk", 00:04:41.363 "block_size": 512, 00:04:41.363 "num_blocks": 16384, 00:04:41.363 "uuid": "b3a8e69e-e744-48eb-9a99-535201d9ae7d", 00:04:41.363 "assigned_rate_limits": { 00:04:41.363 "rw_ios_per_sec": 0, 00:04:41.363 "rw_mbytes_per_sec": 0, 00:04:41.363 "r_mbytes_per_sec": 0, 00:04:41.363 "w_mbytes_per_sec": 0 00:04:41.363 }, 00:04:41.363 "claimed": false, 00:04:41.363 "zoned": false, 00:04:41.363 "supported_io_types": { 00:04:41.363 "read": true, 00:04:41.363 "write": true, 00:04:41.363 "unmap": true, 00:04:41.363 "write_zeroes": true, 00:04:41.363 "flush": true, 00:04:41.363 "reset": true, 00:04:41.363 "compare": false, 00:04:41.363 "compare_and_write": false, 00:04:41.363 "abort": true, 00:04:41.363 "nvme_admin": false, 00:04:41.363 "nvme_io": false 00:04:41.363 }, 00:04:41.363 "memory_domains": [ 00:04:41.363 { 00:04:41.363 "dma_device_id": "system", 00:04:41.363 "dma_device_type": 1 00:04:41.363 }, 00:04:41.363 { 00:04:41.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.363 "dma_device_type": 2 00:04:41.363 } 00:04:41.363 ], 00:04:41.363 "driver_specific": {} 00:04:41.363 } 00:04:41.363 ]' 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.363 [2024-05-15 15:42:39.866787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:41.363 [2024-05-15 15:42:39.866813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.363 [2024-05-15 15:42:39.866827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2266080 00:04:41.363 [2024-05-15 15:42:39.866835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.363 [2024-05-15 15:42:39.867781] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.363 [2024-05-15 15:42:39.867802] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.363 Passthru0 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.363 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.363 { 00:04:41.363 "name": "Malloc2", 00:04:41.363 "aliases": [ 00:04:41.363 "b3a8e69e-e744-48eb-9a99-535201d9ae7d" 00:04:41.363 ], 00:04:41.363 "product_name": "Malloc disk", 00:04:41.363 "block_size": 512, 00:04:41.363 "num_blocks": 16384, 00:04:41.363 "uuid": "b3a8e69e-e744-48eb-9a99-535201d9ae7d", 00:04:41.363 "assigned_rate_limits": { 00:04:41.363 "rw_ios_per_sec": 0, 00:04:41.363 "rw_mbytes_per_sec": 0, 00:04:41.363 "r_mbytes_per_sec": 0, 00:04:41.363 "w_mbytes_per_sec": 0 00:04:41.363 }, 00:04:41.363 "claimed": true, 00:04:41.363 "claim_type": "exclusive_write", 00:04:41.363 "zoned": false, 00:04:41.363 "supported_io_types": { 00:04:41.363 "read": true, 00:04:41.363 "write": true, 00:04:41.363 "unmap": true, 00:04:41.363 "write_zeroes": true, 00:04:41.363 "flush": true, 00:04:41.363 "reset": true, 00:04:41.363 "compare": false, 00:04:41.363 "compare_and_write": false, 00:04:41.363 "abort": true, 00:04:41.363 "nvme_admin": false, 00:04:41.363 "nvme_io": false 00:04:41.363 }, 00:04:41.363 "memory_domains": [ 00:04:41.363 { 00:04:41.363 "dma_device_id": "system", 00:04:41.363 "dma_device_type": 1 00:04:41.363 }, 00:04:41.363 { 00:04:41.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.363 "dma_device_type": 2 00:04:41.363 } 00:04:41.363 ], 00:04:41.363 "driver_specific": {} 00:04:41.363 }, 00:04:41.363 { 00:04:41.363 "name": "Passthru0", 00:04:41.363 "aliases": [ 00:04:41.363 "0775b89f-9321-503c-ac27-3a52eb68e0b0" 00:04:41.363 ], 00:04:41.363 "product_name": "passthru", 00:04:41.363 "block_size": 512, 00:04:41.363 "num_blocks": 16384, 00:04:41.363 "uuid": "0775b89f-9321-503c-ac27-3a52eb68e0b0", 00:04:41.363 "assigned_rate_limits": { 00:04:41.363 "rw_ios_per_sec": 0, 00:04:41.363 "rw_mbytes_per_sec": 0, 00:04:41.363 "r_mbytes_per_sec": 0, 00:04:41.363 "w_mbytes_per_sec": 0 00:04:41.363 }, 00:04:41.363 "claimed": false, 00:04:41.363 "zoned": false, 00:04:41.363 "supported_io_types": { 00:04:41.363 "read": true, 00:04:41.363 "write": true, 00:04:41.363 "unmap": true, 00:04:41.363 "write_zeroes": true, 00:04:41.363 "flush": true, 00:04:41.363 "reset": true, 00:04:41.363 "compare": false, 00:04:41.363 "compare_and_write": false, 00:04:41.363 "abort": true, 00:04:41.363 "nvme_admin": false, 00:04:41.363 "nvme_io": false 00:04:41.363 }, 00:04:41.363 "memory_domains": [ 00:04:41.363 { 00:04:41.363 "dma_device_id": "system", 00:04:41.363 "dma_device_type": 1 00:04:41.363 }, 00:04:41.363 { 00:04:41.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.363 "dma_device_type": 2 00:04:41.363 } 00:04:41.363 ], 00:04:41.363 "driver_specific": { 00:04:41.363 "passthru": { 00:04:41.363 "name": "Passthru0", 00:04:41.363 "base_bdev_name": "Malloc2" 00:04:41.363 } 00:04:41.363 } 00:04:41.363 } 00:04:41.363 ]' 00:04:41.363 15:42:39 
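The length check the trace resumes with just below is the suite's standard assertion idiom: capture the JSON the target returns, reduce it with jq, and compare. Condensed from the rpc.sh@20-21 steps shown in this section (rpc.py stands in for the suite's rpc_cmd wrapper):

  # capture the current bdev list and assert on its length
  bdevs=$(./scripts/rpc.py bdev_get_bdevs)
  [ "$(echo "$bdevs" | jq length)" == "2" ]    # Malloc2 + Passthru0 expected at this point
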
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.621 15:42:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.621 15:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.621 00:04:41.621 real 0m0.295s 00:04:41.621 user 0m0.178s 00:04:41.621 sys 0m0.049s 00:04:41.621 15:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.621 15:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.621 ************************************ 00:04:41.621 END TEST rpc_daemon_integrity 00:04:41.621 ************************************ 00:04:41.621 15:42:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:41.621 15:42:40 rpc -- rpc/rpc.sh@84 -- # killprocess 3561644 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@946 -- # '[' -z 3561644 ']' 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@950 -- # kill -0 3561644 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@951 -- # uname 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3561644 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3561644' 00:04:41.621 killing process with pid 3561644 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@965 -- # kill 3561644 00:04:41.621 15:42:40 rpc -- common/autotest_common.sh@970 -- # wait 3561644 00:04:42.187 00:04:42.187 real 0m2.653s 00:04:42.187 user 0m3.365s 00:04:42.187 sys 0m0.804s 00:04:42.187 15:42:40 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.187 15:42:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.187 ************************************ 00:04:42.187 END TEST rpc 00:04:42.187 ************************************ 00:04:42.187 15:42:40 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:42.187 15:42:40 
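Every suite above ends with the same teardown, visible in the killprocess calls for pid 3561644: verify the pid is still alive and still the expected reactor process before signalling it. A condensed paraphrase of the helper steps the trace shows (kill -0, ps, kill, wait); the helper internals are not quoted verbatim:

  # teardown pattern used by rpc.sh@84 above
  kill -0 "$spdk_pid"                          # fail fast if the target already died
  ps --no-headers -o comm= "$spdk_pid"         # sanity-check it is still reactor_0
  kill "$spdk_pid" && wait "$spdk_pid"         # signal, then reap the exit status
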
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.187 15:42:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.187 15:42:40 -- common/autotest_common.sh@10 -- # set +x 00:04:42.187 ************************************ 00:04:42.187 START TEST skip_rpc 00:04:42.187 ************************************ 00:04:42.187 15:42:40 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:42.187 * Looking for test storage... 00:04:42.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.187 15:42:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.187 15:42:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:42.187 15:42:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:42.187 15:42:40 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.187 15:42:40 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.187 15:42:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.187 ************************************ 00:04:42.187 START TEST skip_rpc 00:04:42.187 ************************************ 00:04:42.187 15:42:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:42.187 15:42:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3562353 00:04:42.187 15:42:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.187 15:42:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:42.187 15:42:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:42.187 [2024-05-15 15:42:40.739641] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
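The skip_rpc case launched above inverts the usual flow: the target comes up with --no-rpc-server, so the client call must fail, and the suite's NOT helper turns that expected failure into a pass. Reduced to the commands visible in the trace:

  # skip_rpc.sh@15/@19/@21 above: no RPC server, so the client must fail
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                            # fixed settle delay instead of waitforlisten
  NOT rpc_cmd spdk_get_version       # NOT inverts the exit status of an expected failure
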
00:04:42.187 [2024-05-15 15:42:40.739683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562353 ] 00:04:42.444 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.444 [2024-05-15 15:42:40.807554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.444 [2024-05-15 15:42:40.877114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:47.695 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3562353 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3562353 ']' 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3562353 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3562353 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3562353' 00:04:47.696 killing process with pid 3562353 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3562353 00:04:47.696 15:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3562353 00:04:47.696 00:04:47.696 real 0m5.404s 00:04:47.696 user 0m5.163s 00:04:47.696 sys 0m0.280s 00:04:47.696 15:42:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.696 15:42:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.696 ************************************ 00:04:47.696 END TEST skip_rpc 
00:04:47.696 ************************************ 00:04:47.696 15:42:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:47.696 15:42:46 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.696 15:42:46 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.696 15:42:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.696 ************************************ 00:04:47.696 START TEST skip_rpc_with_json 00:04:47.696 ************************************ 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3563337 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3563337 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3563337 ']' 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.696 15:42:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.696 [2024-05-15 15:42:46.213258] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
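skip_rpc_with_json, which the trace below walks through, persists live state and proves it survives a restart: provision a TCP transport, snapshot the configuration, then relaunch the target from the snapshot. In outline; the redirection into config.json is inferred from the @36/@37 pair below rather than shown verbatim in the xtrace:

  rpc_cmd nvmf_create_transport -t tcp                    # something worth persisting
  rpc_cmd save_config > test/rpc/config.json              # snapshot the running target
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json   # replay it
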
00:04:47.696 [2024-05-15 15:42:46.213303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3563337 ] 00:04:47.696 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.953 [2024-05-15 15:42:46.280755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.953 [2024-05-15 15:42:46.354986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.518 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:48.518 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:48.518 15:42:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:48.518 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.518 15:42:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.518 [2024-05-15 15:42:47.000131] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:48.518 request: 00:04:48.518 { 00:04:48.518 "trtype": "tcp", 00:04:48.518 "method": "nvmf_get_transports", 00:04:48.518 "req_id": 1 00:04:48.518 } 00:04:48.518 Got JSON-RPC error response 00:04:48.518 response: 00:04:48.518 { 00:04:48.518 "code": -19, 00:04:48.518 "message": "No such device" 00:04:48.518 } 00:04:48.518 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:48.518 15:42:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:48.518 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.518 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.518 [2024-05-15 15:42:47.008225] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.518 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.518 15:42:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:48.519 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.519 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.777 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.777 15:42:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.777 { 00:04:48.777 "subsystems": [ 00:04:48.777 { 00:04:48.777 "subsystem": "vfio_user_target", 00:04:48.777 "config": null 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "keyring", 00:04:48.777 "config": [] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "iobuf", 00:04:48.777 "config": [ 00:04:48.777 { 00:04:48.777 "method": "iobuf_set_options", 00:04:48.777 "params": { 00:04:48.777 "small_pool_count": 8192, 00:04:48.777 "large_pool_count": 1024, 00:04:48.777 "small_bufsize": 8192, 00:04:48.777 "large_bufsize": 135168 00:04:48.777 } 00:04:48.777 } 00:04:48.777 ] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "sock", 00:04:48.777 "config": [ 00:04:48.777 { 00:04:48.777 "method": "sock_impl_set_options", 00:04:48.777 "params": { 00:04:48.777 "impl_name": "posix", 00:04:48.777 "recv_buf_size": 2097152, 00:04:48.777 "send_buf_size": 2097152, 
00:04:48.777 "enable_recv_pipe": true, 00:04:48.777 "enable_quickack": false, 00:04:48.777 "enable_placement_id": 0, 00:04:48.777 "enable_zerocopy_send_server": true, 00:04:48.777 "enable_zerocopy_send_client": false, 00:04:48.777 "zerocopy_threshold": 0, 00:04:48.777 "tls_version": 0, 00:04:48.777 "enable_ktls": false 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "sock_impl_set_options", 00:04:48.777 "params": { 00:04:48.777 "impl_name": "ssl", 00:04:48.777 "recv_buf_size": 4096, 00:04:48.777 "send_buf_size": 4096, 00:04:48.777 "enable_recv_pipe": true, 00:04:48.777 "enable_quickack": false, 00:04:48.777 "enable_placement_id": 0, 00:04:48.777 "enable_zerocopy_send_server": true, 00:04:48.777 "enable_zerocopy_send_client": false, 00:04:48.777 "zerocopy_threshold": 0, 00:04:48.777 "tls_version": 0, 00:04:48.777 "enable_ktls": false 00:04:48.777 } 00:04:48.777 } 00:04:48.777 ] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "vmd", 00:04:48.777 "config": [] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "accel", 00:04:48.777 "config": [ 00:04:48.777 { 00:04:48.777 "method": "accel_set_options", 00:04:48.777 "params": { 00:04:48.777 "small_cache_size": 128, 00:04:48.777 "large_cache_size": 16, 00:04:48.777 "task_count": 2048, 00:04:48.777 "sequence_count": 2048, 00:04:48.777 "buf_count": 2048 00:04:48.777 } 00:04:48.777 } 00:04:48.777 ] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "bdev", 00:04:48.777 "config": [ 00:04:48.777 { 00:04:48.777 "method": "bdev_set_options", 00:04:48.777 "params": { 00:04:48.777 "bdev_io_pool_size": 65535, 00:04:48.777 "bdev_io_cache_size": 256, 00:04:48.777 "bdev_auto_examine": true, 00:04:48.777 "iobuf_small_cache_size": 128, 00:04:48.777 "iobuf_large_cache_size": 16 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "bdev_raid_set_options", 00:04:48.777 "params": { 00:04:48.777 "process_window_size_kb": 1024 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "bdev_iscsi_set_options", 00:04:48.777 "params": { 00:04:48.777 "timeout_sec": 30 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "bdev_nvme_set_options", 00:04:48.777 "params": { 00:04:48.777 "action_on_timeout": "none", 00:04:48.777 "timeout_us": 0, 00:04:48.777 "timeout_admin_us": 0, 00:04:48.777 "keep_alive_timeout_ms": 10000, 00:04:48.777 "arbitration_burst": 0, 00:04:48.777 "low_priority_weight": 0, 00:04:48.777 "medium_priority_weight": 0, 00:04:48.777 "high_priority_weight": 0, 00:04:48.777 "nvme_adminq_poll_period_us": 10000, 00:04:48.777 "nvme_ioq_poll_period_us": 0, 00:04:48.777 "io_queue_requests": 0, 00:04:48.777 "delay_cmd_submit": true, 00:04:48.777 "transport_retry_count": 4, 00:04:48.777 "bdev_retry_count": 3, 00:04:48.777 "transport_ack_timeout": 0, 00:04:48.777 "ctrlr_loss_timeout_sec": 0, 00:04:48.777 "reconnect_delay_sec": 0, 00:04:48.777 "fast_io_fail_timeout_sec": 0, 00:04:48.777 "disable_auto_failback": false, 00:04:48.777 "generate_uuids": false, 00:04:48.777 "transport_tos": 0, 00:04:48.777 "nvme_error_stat": false, 00:04:48.777 "rdma_srq_size": 0, 00:04:48.777 "io_path_stat": false, 00:04:48.777 "allow_accel_sequence": false, 00:04:48.777 "rdma_max_cq_size": 0, 00:04:48.777 "rdma_cm_event_timeout_ms": 0, 00:04:48.777 "dhchap_digests": [ 00:04:48.777 "sha256", 00:04:48.777 "sha384", 00:04:48.777 "sha512" 00:04:48.777 ], 00:04:48.777 "dhchap_dhgroups": [ 00:04:48.777 "null", 00:04:48.777 "ffdhe2048", 00:04:48.777 "ffdhe3072", 00:04:48.777 "ffdhe4096", 00:04:48.777 
"ffdhe6144", 00:04:48.777 "ffdhe8192" 00:04:48.777 ] 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "bdev_nvme_set_hotplug", 00:04:48.777 "params": { 00:04:48.777 "period_us": 100000, 00:04:48.777 "enable": false 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "bdev_wait_for_examine" 00:04:48.777 } 00:04:48.777 ] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "scsi", 00:04:48.777 "config": null 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "scheduler", 00:04:48.777 "config": [ 00:04:48.777 { 00:04:48.777 "method": "framework_set_scheduler", 00:04:48.777 "params": { 00:04:48.777 "name": "static" 00:04:48.777 } 00:04:48.777 } 00:04:48.777 ] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "vhost_scsi", 00:04:48.777 "config": [] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "vhost_blk", 00:04:48.777 "config": [] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "ublk", 00:04:48.777 "config": [] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "nbd", 00:04:48.777 "config": [] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "nvmf", 00:04:48.777 "config": [ 00:04:48.777 { 00:04:48.777 "method": "nvmf_set_config", 00:04:48.777 "params": { 00:04:48.777 "discovery_filter": "match_any", 00:04:48.777 "admin_cmd_passthru": { 00:04:48.777 "identify_ctrlr": false 00:04:48.777 } 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "nvmf_set_max_subsystems", 00:04:48.777 "params": { 00:04:48.777 "max_subsystems": 1024 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "nvmf_set_crdt", 00:04:48.777 "params": { 00:04:48.777 "crdt1": 0, 00:04:48.777 "crdt2": 0, 00:04:48.777 "crdt3": 0 00:04:48.777 } 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "method": "nvmf_create_transport", 00:04:48.777 "params": { 00:04:48.777 "trtype": "TCP", 00:04:48.777 "max_queue_depth": 128, 00:04:48.777 "max_io_qpairs_per_ctrlr": 127, 00:04:48.777 "in_capsule_data_size": 4096, 00:04:48.777 "max_io_size": 131072, 00:04:48.777 "io_unit_size": 131072, 00:04:48.777 "max_aq_depth": 128, 00:04:48.777 "num_shared_buffers": 511, 00:04:48.777 "buf_cache_size": 4294967295, 00:04:48.777 "dif_insert_or_strip": false, 00:04:48.777 "zcopy": false, 00:04:48.777 "c2h_success": true, 00:04:48.777 "sock_priority": 0, 00:04:48.777 "abort_timeout_sec": 1, 00:04:48.777 "ack_timeout": 0, 00:04:48.777 "data_wr_pool_size": 0 00:04:48.777 } 00:04:48.777 } 00:04:48.777 ] 00:04:48.777 }, 00:04:48.777 { 00:04:48.777 "subsystem": "iscsi", 00:04:48.777 "config": [ 00:04:48.777 { 00:04:48.777 "method": "iscsi_set_options", 00:04:48.777 "params": { 00:04:48.777 "node_base": "iqn.2016-06.io.spdk", 00:04:48.777 "max_sessions": 128, 00:04:48.778 "max_connections_per_session": 2, 00:04:48.778 "max_queue_depth": 64, 00:04:48.778 "default_time2wait": 2, 00:04:48.778 "default_time2retain": 20, 00:04:48.778 "first_burst_length": 8192, 00:04:48.778 "immediate_data": true, 00:04:48.778 "allow_duplicated_isid": false, 00:04:48.778 "error_recovery_level": 0, 00:04:48.778 "nop_timeout": 60, 00:04:48.778 "nop_in_interval": 30, 00:04:48.778 "disable_chap": false, 00:04:48.778 "require_chap": false, 00:04:48.778 "mutual_chap": false, 00:04:48.778 "chap_group": 0, 00:04:48.778 "max_large_datain_per_connection": 64, 00:04:48.778 "max_r2t_per_connection": 4, 00:04:48.778 "pdu_pool_size": 36864, 00:04:48.778 "immediate_data_pool_size": 16384, 00:04:48.778 "data_out_pool_size": 2048 00:04:48.778 } 00:04:48.778 } 00:04:48.778 ] 00:04:48.778 } 
00:04:48.778 ] 00:04:48.778 } 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3563337 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3563337 ']' 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3563337 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3563337 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3563337' 00:04:48.778 killing process with pid 3563337 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3563337 00:04:48.778 15:42:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3563337 00:04:49.035 15:42:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3563547 00:04:49.035 15:42:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:49.035 15:42:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3563547 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3563547 ']' 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3563547 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3563547 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3563547' 00:04:54.292 killing process with pid 3563547 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3563547 00:04:54.292 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3563547 00:04:54.582 15:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.582 15:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.582 00:04:54.582 real 0m6.774s 00:04:54.582 user 0m6.524s 00:04:54.582 sys 0m0.667s 00:04:54.582 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
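The relaunched target writes its output to log.txt, and the pass signal is simply whether the TCP transport banner shows up again, as skip_rpc.sh@51-52 check just below (condensed here into one line; the suite runs the grep and the rm as separate steps):

  # the JSON replay is judged by one line in the relaunch log
  grep -q 'TCP Transport Init' test/rpc/log.txt && rm test/rpc/log.txt
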
00:04:54.582 15:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 ************************************ 00:04:54.582 END TEST skip_rpc_with_json 00:04:54.582 ************************************ 00:04:54.582 15:42:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.582 15:42:52 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.582 15:42:52 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.582 15:42:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 ************************************ 00:04:54.582 START TEST skip_rpc_with_delay 00:04:54.582 ************************************ 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.582 [2024-05-15 15:42:53.070052] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
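skip_rpc_with_delay drives spdk_app_start itself into the error path: --wait-for-rpc is meaningless when no RPC server will start, so the launch must fail, and the NOT wrapper plus the valid_exec_arg preamble above assert exactly that. Reduced to its core:

  # the two flags are mutually exclusive; a clean startup here would be a test failure
  NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
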
00:04:54.582 [2024-05-15 15:42:53.070112] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:54.582 00:04:54.582 real 0m0.064s 00:04:54.582 user 0m0.032s 00:04:54.582 sys 0m0.031s 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.582 15:42:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 ************************************ 00:04:54.582 END TEST skip_rpc_with_delay 00:04:54.582 ************************************ 00:04:54.582 15:42:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.854 15:42:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.854 15:42:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.854 15:42:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.854 15:42:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.854 15:42:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.854 ************************************ 00:04:54.854 START TEST exit_on_failed_rpc_init 00:04:54.854 ************************************ 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3564575 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3564575 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3564575 ']' 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.854 15:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.854 [2024-05-15 15:42:53.225310] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
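The last case in this stretch, exit_on_failed_rpc_init, provokes the failure the trace below records: a first target claims /var/tmp/spdk.sock, a second target on a different core mask tries to bind the same path, rpc_listen reports the socket in use, and the second app must exit non-zero. In outline, using the suite's NOT expected-failure wrapper:

  # skip_rpc.sh@61/@67 below: two targets, one RPC socket
  ./build/bin/spdk_tgt -m 0x1 &          # first target owns /var/tmp/spdk.sock
  spdk_pid=$!
  NOT ./build/bin/spdk_tgt -m 0x2        # second target must fail: socket path in use
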
00:04:54.855 [2024-05-15 15:42:53.225353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3564575 ] 00:04:54.855 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.855 [2024-05-15 15:42:53.294319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.855 [2024-05-15 15:42:53.367182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.789 [2024-05-15 15:42:54.087083] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:04:55.789 [2024-05-15 15:42:54.087136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3564829 ] 00:04:55.789 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.789 [2024-05-15 15:42:54.154325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.789 [2024-05-15 15:42:54.224732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.789 [2024-05-15 15:42:54.224801] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
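The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error just above (and the follow-on errors below) is exactly what exit_on_failed_rpc_init is after: a second spdk_tgt started against the socket the first instance already owns must fail. A rough sketch of that scenario, not captured output, using the same binary and core masks as this run:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &                      # first instance claims /var/tmp/spdk.sock
  first=$!
  sleep 1                                   # crude wait; the harness polls the socket instead
  if "$SPDK_TGT" -m 0x2; then               # second instance should exit non-zero: socket in use
      echo "unexpected success" >&2
  fi
  kill -SIGINT "$first"                     # clean up the surviving target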
00:04:55.789 [2024-05-15 15:42:54.224813] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.789 [2024-05-15 15:42:54.224821] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3564575 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3564575 ']' 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3564575 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:55.789 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3564575 00:04:56.047 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:56.047 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:56.047 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3564575' 00:04:56.047 killing process with pid 3564575 00:04:56.047 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3564575 00:04:56.047 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3564575 00:04:56.306 00:04:56.306 real 0m1.527s 00:04:56.306 user 0m1.751s 00:04:56.306 sys 0m0.437s 00:04:56.306 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.306 15:42:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.306 ************************************ 00:04:56.306 END TEST exit_on_failed_rpc_init 00:04:56.306 ************************************ 00:04:56.306 15:42:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.306 00:04:56.306 real 0m14.205s 00:04:56.306 user 0m13.625s 00:04:56.306 sys 0m1.708s 00:04:56.306 15:42:54 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.306 15:42:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.306 ************************************ 00:04:56.306 END TEST skip_rpc 00:04:56.306 ************************************ 00:04:56.306 15:42:54 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.306 15:42:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.306 15:42:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.306 15:42:54 -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.306 ************************************ 00:04:56.306 START TEST rpc_client 00:04:56.306 ************************************ 00:04:56.306 15:42:54 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.564 * Looking for test storage... 00:04:56.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:56.564 15:42:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:56.564 OK 00:04:56.564 15:42:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.564 00:04:56.564 real 0m0.136s 00:04:56.564 user 0m0.056s 00:04:56.564 sys 0m0.089s 00:04:56.564 15:42:54 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.564 15:42:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:56.564 ************************************ 00:04:56.564 END TEST rpc_client 00:04:56.564 ************************************ 00:04:56.564 15:42:55 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.564 15:42:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.564 15:42:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.564 15:42:55 -- common/autotest_common.sh@10 -- # set +x 00:04:56.564 ************************************ 00:04:56.564 START TEST json_config 00:04:56.564 ************************************ 00:04:56.564 15:42:55 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.564 15:42:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.822 15:42:55 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.822 15:42:55 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.822 15:42:55 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.822 15:42:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.822 15:42:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.822 15:42:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.822 15:42:55 json_config -- paths/export.sh@5 -- # export PATH 00:04:56.822 15:42:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@47 -- # : 0 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.822 15:42:55 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:56.822 INFO: JSON configuration test init 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:56.822 15:42:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:56.822 15:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.822 15:42:55 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:56.822 15:42:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:56.823 15:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.823 15:42:55 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:56.823 15:42:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.823 15:42:55 json_config -- json_config/common.sh@10 -- # shift 00:04:56.823 15:42:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.823 15:42:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.823 15:42:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.823 15:42:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.823 15:42:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.823 15:42:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3565001 00:04:56.823 15:42:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.823 Waiting for target to run... 
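Each json_config target in this run is started the same way: spdk_tgt is backgrounded with -r /var/tmp/spdk_tgt.sock --wait-for-rpc (traced just below) and the harness then polls the socket until it answers. A condensed sketch of that start-and-wait pattern; the retry budget and the use of rpc_get_methods as a liveness probe are illustrative choices, not the harness values:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  pid=$!
  for i in $(seq 1 100); do                 # illustrative retry budget
      "$RPC" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done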
00:04:56.823 15:42:55 json_config -- json_config/common.sh@25 -- # waitforlisten 3565001 /var/tmp/spdk_tgt.sock 00:04:56.823 15:42:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:56.823 15:42:55 json_config -- common/autotest_common.sh@827 -- # '[' -z 3565001 ']' 00:04:56.823 15:42:55 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.823 15:42:55 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:56.823 15:42:55 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.823 15:42:55 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:56.823 15:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.823 [2024-05-15 15:42:55.228877] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:04:56.823 [2024-05-15 15:42:55.228932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565001 ] 00:04:56.823 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.388 [2024-05-15 15:42:55.656072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.388 [2024-05-15 15:42:55.745387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.646 15:42:56 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:57.646 15:42:56 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:57.646 15:42:56 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.646 00:04:57.646 15:42:56 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:57.646 15:42:56 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:57.646 15:42:56 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:57.646 15:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.646 15:42:56 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:57.646 15:42:56 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:57.646 15:42:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.646 15:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.646 15:42:56 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:57.646 15:42:56 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:57.646 15:42:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:00.929 15:42:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:00.929 15:42:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.929 15:42:59 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:00.929 15:42:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:00.929 15:42:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.929 15:42:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:00.929 15:42:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:00.929 15:42:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:00.929 15:42:59 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.929 15:42:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.187 MallocForNvmf0 00:05:01.187 15:42:59 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.187 15:42:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.187 MallocForNvmf1 00:05:01.187 15:42:59 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.187 15:42:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.444 [2024-05-15 15:42:59.862240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.444 15:42:59 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.445 15:42:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.702 15:43:00 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.702 15:43:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.702 15:43:00 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.702 15:43:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.959 15:43:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.959 15:43:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.217 [2024-05-15 15:43:00.539997] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:02.217 [2024-05-15 15:43:00.540422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.217 15:43:00 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:02.217 15:43:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.217 15:43:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.217 15:43:00 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:02.217 15:43:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.217 15:43:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.217 15:43:00 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:02.217 15:43:00 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.217 15:43:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.513 MallocBdevForConfigChangeCheck 00:05:02.513 15:43:00 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:02.513 15:43:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.513 15:43:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.513 15:43:00 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:02.513 15:43:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.769 15:43:01 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:05:02.769 INFO: shutting down applications... 00:05:02.769 15:43:01 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:02.769 15:43:01 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:02.769 15:43:01 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:02.769 15:43:01 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:04.670 Calling clear_iscsi_subsystem 00:05:04.670 Calling clear_nvmf_subsystem 00:05:04.670 Calling clear_nbd_subsystem 00:05:04.670 Calling clear_ublk_subsystem 00:05:04.670 Calling clear_vhost_blk_subsystem 00:05:04.670 Calling clear_vhost_scsi_subsystem 00:05:04.670 Calling clear_bdev_subsystem 00:05:04.670 15:43:03 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:04.670 15:43:03 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:04.670 15:43:03 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:04.670 15:43:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.670 15:43:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:04.670 15:43:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:05.236 15:43:03 json_config -- json_config/json_config.sh@345 -- # break 00:05:05.236 15:43:03 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:05.236 15:43:03 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:05.236 15:43:03 json_config -- json_config/common.sh@31 -- # local app=target 00:05:05.236 15:43:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:05.236 15:43:03 json_config -- json_config/common.sh@35 -- # [[ -n 3565001 ]] 00:05:05.236 15:43:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3565001 00:05:05.236 [2024-05-15 15:43:03.507491] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:05.236 15:43:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:05.236 15:43:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.236 15:43:03 json_config -- json_config/common.sh@41 -- # kill -0 3565001 00:05:05.236 15:43:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.495 15:43:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.495 15:43:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.495 15:43:04 json_config -- json_config/common.sh@41 -- # kill -0 3565001 00:05:05.495 15:43:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:05.495 15:43:04 json_config -- json_config/common.sh@43 -- # break 00:05:05.495 15:43:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:05.495 15:43:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:05.495 SPDK target shutdown done 00:05:05.495 15:43:04 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:05.495 INFO: relaunching applications... 00:05:05.495 15:43:04 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.495 15:43:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:05.495 15:43:04 json_config -- json_config/common.sh@10 -- # shift 00:05:05.495 15:43:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.495 15:43:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.495 15:43:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.495 15:43:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.495 15:43:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.495 15:43:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3566691 00:05:05.495 15:43:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.495 Waiting for target to run... 00:05:05.495 15:43:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.495 15:43:04 json_config -- json_config/common.sh@25 -- # waitforlisten 3566691 /var/tmp/spdk_tgt.sock 00:05:05.495 15:43:04 json_config -- common/autotest_common.sh@827 -- # '[' -z 3566691 ']' 00:05:05.495 15:43:04 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.495 15:43:04 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:05.495 15:43:04 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.495 15:43:04 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:05.495 15:43:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.753 [2024-05-15 15:43:04.067020] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:05:05.753 [2024-05-15 15:43:04.067085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3566691 ] 00:05:05.753 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.012 [2024-05-15 15:43:04.513461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.271 [2024-05-15 15:43:04.597367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.558 [2024-05-15 15:43:07.616208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.558 [2024-05-15 15:43:07.648211] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:09.558 [2024-05-15 15:43:07.648621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.816 15:43:08 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:09.816 15:43:08 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:09.816 15:43:08 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.816 00:05:09.816 15:43:08 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:09.816 15:43:08 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:09.816 INFO: Checking if target configuration is the same... 00:05:09.816 15:43:08 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:09.816 15:43:08 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.816 15:43:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.816 + '[' 2 -ne 2 ']' 00:05:09.816 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.816 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:09.816 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.816 +++ basename /dev/fd/62 00:05:09.816 ++ mktemp /tmp/62.XXX 00:05:09.816 + tmp_file_1=/tmp/62.jy7 00:05:09.816 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.816 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.816 + tmp_file_2=/tmp/spdk_tgt_config.json.gqo 00:05:09.816 + ret=0 00:05:09.816 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.075 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.075 + diff -u /tmp/62.jy7 /tmp/spdk_tgt_config.json.gqo 00:05:10.075 + echo 'INFO: JSON config files are the same' 00:05:10.075 INFO: JSON config files are the same 00:05:10.075 + rm /tmp/62.jy7 /tmp/spdk_tgt_config.json.gqo 00:05:10.075 + exit 0 00:05:10.075 15:43:08 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:10.075 15:43:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:10.075 INFO: changing configuration and checking if this can be detected... 
00:05:10.075 15:43:08 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.075 15:43:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.334 15:43:08 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.334 15:43:08 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:10.334 15:43:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.334 + '[' 2 -ne 2 ']' 00:05:10.334 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:10.334 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:10.334 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:10.334 +++ basename /dev/fd/62 00:05:10.334 ++ mktemp /tmp/62.XXX 00:05:10.334 + tmp_file_1=/tmp/62.9RC 00:05:10.334 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.334 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.334 + tmp_file_2=/tmp/spdk_tgt_config.json.eWu 00:05:10.334 + ret=0 00:05:10.334 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.592 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.592 + diff -u /tmp/62.9RC /tmp/spdk_tgt_config.json.eWu 00:05:10.592 + ret=1 00:05:10.592 + echo '=== Start of file: /tmp/62.9RC ===' 00:05:10.592 + cat /tmp/62.9RC 00:05:10.592 + echo '=== End of file: /tmp/62.9RC ===' 00:05:10.592 + echo '' 00:05:10.592 + echo '=== Start of file: /tmp/spdk_tgt_config.json.eWu ===' 00:05:10.592 + cat /tmp/spdk_tgt_config.json.eWu 00:05:10.592 + echo '=== End of file: /tmp/spdk_tgt_config.json.eWu ===' 00:05:10.592 + echo '' 00:05:10.592 + rm /tmp/62.9RC /tmp/spdk_tgt_config.json.eWu 00:05:10.592 + exit 1 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:10.592 INFO: configuration change detected. 
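Both verdicts above ("JSON config files are the same", then "configuration change detected") come from normalizing the running target's config and the saved file with config_filter.py and diffing the two. A compressed sketch of that comparison, not captured output, with temp-file handling chosen only for illustration:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
  SAVED=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
  live=$(mktemp); saved=$(mktemp)
  "$RPC" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > "$live"
  "$FILTER" -method sort < "$SAVED" > "$saved"
  diff -u "$saved" "$live" && echo "INFO: JSON config files are the same"
  # removing the sentinel bdev must now make the diff non-empty
  "$RPC" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  "$RPC" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > "$live"
  diff -u "$saved" "$live" || echo "INFO: configuration change detected."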
00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:10.592 15:43:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:10.592 15:43:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@317 -- # [[ -n 3566691 ]] 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:10.592 15:43:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:10.592 15:43:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:10.592 15:43:09 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:10.592 15:43:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.592 15:43:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.851 15:43:09 json_config -- json_config/json_config.sh@323 -- # killprocess 3566691 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@946 -- # '[' -z 3566691 ']' 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@950 -- # kill -0 3566691 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@951 -- # uname 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3566691 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3566691' 00:05:10.851 killing process with pid 3566691 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@965 -- # kill 3566691 00:05:10.851 [2024-05-15 15:43:09.230899] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:10.851 15:43:09 json_config -- common/autotest_common.sh@970 -- # wait 3566691 00:05:13.413 15:43:11 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.413 15:43:11 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:13.413 15:43:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.413 15:43:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.414 15:43:11 
json_config -- json_config/json_config.sh@328 -- # return 0 00:05:13.414 15:43:11 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:13.414 INFO: Success 00:05:13.414 00:05:13.414 real 0m16.346s 00:05:13.414 user 0m16.698s 00:05:13.414 sys 0m2.332s 00:05:13.414 15:43:11 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.414 15:43:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.414 ************************************ 00:05:13.414 END TEST json_config 00:05:13.414 ************************************ 00:05:13.414 15:43:11 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.414 15:43:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.414 15:43:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.414 15:43:11 -- common/autotest_common.sh@10 -- # set +x 00:05:13.414 ************************************ 00:05:13.414 START TEST json_config_extra_key 00:05:13.414 ************************************ 00:05:13.414 15:43:11 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.414 15:43:11 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.414 15:43:11 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.414 
15:43:11 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.414 15:43:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.414 15:43:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.414 15:43:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.414 15:43:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.414 15:43:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:13.414 15:43:11 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.414 15:43:11 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:13.414 INFO: launching applications... 00:05:13.414 15:43:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3568139 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.414 Waiting for target to run... 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3568139 /var/tmp/spdk_tgt.sock 00:05:13.414 15:43:11 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3568139 ']' 00:05:13.414 15:43:11 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.414 15:43:11 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.414 15:43:11 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.414 15:43:11 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.414 15:43:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.414 15:43:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.414 [2024-05-15 15:43:11.606115] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
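Unlike the RPC-driven json_config flow, the extra-key variant boots the target directly from a JSON file (the --json extra_key.json launch traced above) and only has to shut it down cleanly afterwards. A minimal sketch of that launch-and-teardown cycle, not captured output:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json &
  pid=$!
  # ... exercise the target over /var/tmp/spdk_tgt.sock here ...
  kill -SIGINT "$pid"
  while kill -0 "$pid" 2>/dev/null; do      # same poll-until-gone loop the harness traces
      sleep 0.5
  done
  echo "SPDK target shutdown done"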
00:05:13.414 [2024-05-15 15:43:11.606164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568139 ] 00:05:13.414 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.414 [2024-05-15 15:43:11.892399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.414 [2024-05-15 15:43:11.955116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.980 15:43:12 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:13.980 15:43:12 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:13.980 00:05:13.980 15:43:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:13.980 INFO: shutting down applications... 00:05:13.980 15:43:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3568139 ]] 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3568139 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3568139 00:05:13.980 15:43:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.545 15:43:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.545 15:43:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.545 15:43:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3568139 00:05:14.545 15:43:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.545 15:43:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:14.545 15:43:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.545 15:43:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.545 SPDK target shutdown done 00:05:14.545 15:43:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:14.545 Success 00:05:14.545 00:05:14.545 real 0m1.405s 00:05:14.545 user 0m1.159s 00:05:14.545 sys 0m0.396s 00:05:14.545 15:43:12 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.545 15:43:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.545 ************************************ 00:05:14.545 END TEST json_config_extra_key 00:05:14.545 ************************************ 00:05:14.545 15:43:12 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.545 15:43:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.545 15:43:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.545 15:43:12 -- common/autotest_common.sh@10 -- # set +x 00:05:14.545 ************************************ 
00:05:14.545 START TEST alias_rpc 00:05:14.545 ************************************ 00:05:14.545 15:43:12 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.545 * Looking for test storage... 00:05:14.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:14.545 15:43:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:14.545 15:43:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3568456 00:05:14.545 15:43:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3568456 00:05:14.545 15:43:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.545 15:43:13 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3568456 ']' 00:05:14.545 15:43:13 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.545 15:43:13 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:14.545 15:43:13 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.545 15:43:13 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:14.545 15:43:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.803 [2024-05-15 15:43:13.134661] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:14.803 [2024-05-15 15:43:13.134708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568456 ] 00:05:14.803 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.803 [2024-05-15 15:43:13.202500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.803 [2024-05-15 15:43:13.272492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.367 15:43:13 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:15.367 15:43:13 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:15.367 15:43:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:15.625 15:43:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3568456 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3568456 ']' 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3568456 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3568456 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3568456' 00:05:15.625 killing process with pid 3568456 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@965 -- # kill 3568456 00:05:15.625 15:43:14 alias_rpc -- common/autotest_common.sh@970 -- # wait 3568456 
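The alias_rpc run traced above reduces to a short start/configure/stop sequence: launch spdk_tgt, wait for its UNIX-domain RPC socket, replay a saved configuration with rpc.py load_config -i, then kill and reap the target. A minimal standalone sketch of that flow, using the workspace paths from this job; saved_config.json is a placeholder name, and the socket poll is a crude stand-in for the tests' waitforlisten helper:

  #!/usr/bin/env bash
  # Sketch: start an SPDK target, replay a JSON config over RPC, shut it down.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC_SOCK=/var/tmp/spdk.sock

  "$SPDK_DIR/build/bin/spdk_tgt" &
  tgt_pid=$!

  # Poll until the RPC socket appears (waitforlisten does this more carefully).
  for _ in $(seq 1 100); do
      [ -S "$RPC_SOCK" ] && break
      sleep 0.1
  done

  # Replay a previously saved configuration; the file name here is a placeholder.
  "$SPDK_DIR/scripts/rpc.py" load_config -i < saved_config.json

  # Same teardown killprocess performs in the trace: signal the pid, then wait for it.
  kill "$tgt_pid"
  wait "$tgt_pid"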
00:05:16.190 00:05:16.190 real 0m1.529s 00:05:16.190 user 0m1.622s 00:05:16.190 sys 0m0.447s 00:05:16.190 15:43:14 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.190 15:43:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.190 ************************************ 00:05:16.190 END TEST alias_rpc 00:05:16.190 ************************************ 00:05:16.190 15:43:14 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:16.190 15:43:14 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.190 15:43:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.191 15:43:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.191 15:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:16.191 ************************************ 00:05:16.191 START TEST spdkcli_tcp 00:05:16.191 ************************************ 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.191 * Looking for test storage... 00:05:16.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3568779 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3568779 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3568779 ']' 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.191 15:43:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.191 15:43:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:16.191 [2024-05-15 15:43:14.740902] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
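The spdkcli_tcp target started here does not get queried over the default UNIX socket; as the lines that follow show, the test bridges 127.0.0.1:9998 to /var/tmp/spdk.sock with socat and points rpc.py at that TCP address. A minimal sketch of the same bridge, assuming a target is already serving /var/tmp/spdk.sock (port, retry count, and timeout copied from the trace):

  # Bridge TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Query the method list over TCP (-r retry count, -t timeout, as used by the test).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  # socat may already have exited after serving the single connection.
  kill "$socat_pid" 2>/dev/null || true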
00:05:16.191 [2024-05-15 15:43:14.740956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568779 ] 00:05:16.448 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.448 [2024-05-15 15:43:14.809276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.448 [2024-05-15 15:43:14.884881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.448 [2024-05-15 15:43:14.884885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.012 15:43:15 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.012 15:43:15 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:17.012 15:43:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3569037 00:05:17.012 15:43:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:17.012 15:43:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:17.270 [ 00:05:17.270 "bdev_malloc_delete", 00:05:17.270 "bdev_malloc_create", 00:05:17.270 "bdev_null_resize", 00:05:17.270 "bdev_null_delete", 00:05:17.270 "bdev_null_create", 00:05:17.270 "bdev_nvme_cuse_unregister", 00:05:17.270 "bdev_nvme_cuse_register", 00:05:17.270 "bdev_opal_new_user", 00:05:17.270 "bdev_opal_set_lock_state", 00:05:17.270 "bdev_opal_delete", 00:05:17.270 "bdev_opal_get_info", 00:05:17.270 "bdev_opal_create", 00:05:17.270 "bdev_nvme_opal_revert", 00:05:17.270 "bdev_nvme_opal_init", 00:05:17.270 "bdev_nvme_send_cmd", 00:05:17.270 "bdev_nvme_get_path_iostat", 00:05:17.270 "bdev_nvme_get_mdns_discovery_info", 00:05:17.270 "bdev_nvme_stop_mdns_discovery", 00:05:17.270 "bdev_nvme_start_mdns_discovery", 00:05:17.270 "bdev_nvme_set_multipath_policy", 00:05:17.270 "bdev_nvme_set_preferred_path", 00:05:17.270 "bdev_nvme_get_io_paths", 00:05:17.270 "bdev_nvme_remove_error_injection", 00:05:17.270 "bdev_nvme_add_error_injection", 00:05:17.270 "bdev_nvme_get_discovery_info", 00:05:17.270 "bdev_nvme_stop_discovery", 00:05:17.270 "bdev_nvme_start_discovery", 00:05:17.270 "bdev_nvme_get_controller_health_info", 00:05:17.270 "bdev_nvme_disable_controller", 00:05:17.270 "bdev_nvme_enable_controller", 00:05:17.270 "bdev_nvme_reset_controller", 00:05:17.270 "bdev_nvme_get_transport_statistics", 00:05:17.270 "bdev_nvme_apply_firmware", 00:05:17.270 "bdev_nvme_detach_controller", 00:05:17.270 "bdev_nvme_get_controllers", 00:05:17.270 "bdev_nvme_attach_controller", 00:05:17.270 "bdev_nvme_set_hotplug", 00:05:17.270 "bdev_nvme_set_options", 00:05:17.270 "bdev_passthru_delete", 00:05:17.270 "bdev_passthru_create", 00:05:17.270 "bdev_lvol_check_shallow_copy", 00:05:17.270 "bdev_lvol_start_shallow_copy", 00:05:17.270 "bdev_lvol_grow_lvstore", 00:05:17.270 "bdev_lvol_get_lvols", 00:05:17.270 "bdev_lvol_get_lvstores", 00:05:17.270 "bdev_lvol_delete", 00:05:17.270 "bdev_lvol_set_read_only", 00:05:17.270 "bdev_lvol_resize", 00:05:17.270 "bdev_lvol_decouple_parent", 00:05:17.270 "bdev_lvol_inflate", 00:05:17.270 "bdev_lvol_rename", 00:05:17.270 "bdev_lvol_clone_bdev", 00:05:17.270 "bdev_lvol_clone", 00:05:17.270 "bdev_lvol_snapshot", 00:05:17.270 "bdev_lvol_create", 00:05:17.270 "bdev_lvol_delete_lvstore", 00:05:17.270 "bdev_lvol_rename_lvstore", 00:05:17.270 "bdev_lvol_create_lvstore", 00:05:17.270 "bdev_raid_set_options", 
00:05:17.270 "bdev_raid_remove_base_bdev", 00:05:17.270 "bdev_raid_add_base_bdev", 00:05:17.270 "bdev_raid_delete", 00:05:17.270 "bdev_raid_create", 00:05:17.270 "bdev_raid_get_bdevs", 00:05:17.270 "bdev_error_inject_error", 00:05:17.270 "bdev_error_delete", 00:05:17.270 "bdev_error_create", 00:05:17.270 "bdev_split_delete", 00:05:17.270 "bdev_split_create", 00:05:17.270 "bdev_delay_delete", 00:05:17.270 "bdev_delay_create", 00:05:17.270 "bdev_delay_update_latency", 00:05:17.270 "bdev_zone_block_delete", 00:05:17.270 "bdev_zone_block_create", 00:05:17.270 "blobfs_create", 00:05:17.270 "blobfs_detect", 00:05:17.270 "blobfs_set_cache_size", 00:05:17.270 "bdev_aio_delete", 00:05:17.270 "bdev_aio_rescan", 00:05:17.270 "bdev_aio_create", 00:05:17.270 "bdev_ftl_set_property", 00:05:17.270 "bdev_ftl_get_properties", 00:05:17.270 "bdev_ftl_get_stats", 00:05:17.270 "bdev_ftl_unmap", 00:05:17.270 "bdev_ftl_unload", 00:05:17.270 "bdev_ftl_delete", 00:05:17.270 "bdev_ftl_load", 00:05:17.270 "bdev_ftl_create", 00:05:17.270 "bdev_virtio_attach_controller", 00:05:17.270 "bdev_virtio_scsi_get_devices", 00:05:17.270 "bdev_virtio_detach_controller", 00:05:17.270 "bdev_virtio_blk_set_hotplug", 00:05:17.270 "bdev_iscsi_delete", 00:05:17.270 "bdev_iscsi_create", 00:05:17.270 "bdev_iscsi_set_options", 00:05:17.270 "accel_error_inject_error", 00:05:17.270 "ioat_scan_accel_module", 00:05:17.270 "dsa_scan_accel_module", 00:05:17.270 "iaa_scan_accel_module", 00:05:17.270 "vfu_virtio_create_scsi_endpoint", 00:05:17.271 "vfu_virtio_scsi_remove_target", 00:05:17.271 "vfu_virtio_scsi_add_target", 00:05:17.271 "vfu_virtio_create_blk_endpoint", 00:05:17.271 "vfu_virtio_delete_endpoint", 00:05:17.271 "keyring_file_remove_key", 00:05:17.271 "keyring_file_add_key", 00:05:17.271 "iscsi_get_histogram", 00:05:17.271 "iscsi_enable_histogram", 00:05:17.271 "iscsi_set_options", 00:05:17.271 "iscsi_get_auth_groups", 00:05:17.271 "iscsi_auth_group_remove_secret", 00:05:17.271 "iscsi_auth_group_add_secret", 00:05:17.271 "iscsi_delete_auth_group", 00:05:17.271 "iscsi_create_auth_group", 00:05:17.271 "iscsi_set_discovery_auth", 00:05:17.271 "iscsi_get_options", 00:05:17.271 "iscsi_target_node_request_logout", 00:05:17.271 "iscsi_target_node_set_redirect", 00:05:17.271 "iscsi_target_node_set_auth", 00:05:17.271 "iscsi_target_node_add_lun", 00:05:17.271 "iscsi_get_stats", 00:05:17.271 "iscsi_get_connections", 00:05:17.271 "iscsi_portal_group_set_auth", 00:05:17.271 "iscsi_start_portal_group", 00:05:17.271 "iscsi_delete_portal_group", 00:05:17.271 "iscsi_create_portal_group", 00:05:17.271 "iscsi_get_portal_groups", 00:05:17.271 "iscsi_delete_target_node", 00:05:17.271 "iscsi_target_node_remove_pg_ig_maps", 00:05:17.271 "iscsi_target_node_add_pg_ig_maps", 00:05:17.271 "iscsi_create_target_node", 00:05:17.271 "iscsi_get_target_nodes", 00:05:17.271 "iscsi_delete_initiator_group", 00:05:17.271 "iscsi_initiator_group_remove_initiators", 00:05:17.271 "iscsi_initiator_group_add_initiators", 00:05:17.271 "iscsi_create_initiator_group", 00:05:17.271 "iscsi_get_initiator_groups", 00:05:17.271 "nvmf_set_crdt", 00:05:17.271 "nvmf_set_config", 00:05:17.271 "nvmf_set_max_subsystems", 00:05:17.271 "nvmf_stop_mdns_prr", 00:05:17.271 "nvmf_publish_mdns_prr", 00:05:17.271 "nvmf_subsystem_get_listeners", 00:05:17.271 "nvmf_subsystem_get_qpairs", 00:05:17.271 "nvmf_subsystem_get_controllers", 00:05:17.271 "nvmf_get_stats", 00:05:17.271 "nvmf_get_transports", 00:05:17.271 "nvmf_create_transport", 00:05:17.271 "nvmf_get_targets", 00:05:17.271 
"nvmf_delete_target", 00:05:17.271 "nvmf_create_target", 00:05:17.271 "nvmf_subsystem_allow_any_host", 00:05:17.271 "nvmf_subsystem_remove_host", 00:05:17.271 "nvmf_subsystem_add_host", 00:05:17.271 "nvmf_ns_remove_host", 00:05:17.271 "nvmf_ns_add_host", 00:05:17.271 "nvmf_subsystem_remove_ns", 00:05:17.271 "nvmf_subsystem_add_ns", 00:05:17.271 "nvmf_subsystem_listener_set_ana_state", 00:05:17.271 "nvmf_discovery_get_referrals", 00:05:17.271 "nvmf_discovery_remove_referral", 00:05:17.271 "nvmf_discovery_add_referral", 00:05:17.271 "nvmf_subsystem_remove_listener", 00:05:17.271 "nvmf_subsystem_add_listener", 00:05:17.271 "nvmf_delete_subsystem", 00:05:17.271 "nvmf_create_subsystem", 00:05:17.271 "nvmf_get_subsystems", 00:05:17.271 "env_dpdk_get_mem_stats", 00:05:17.271 "nbd_get_disks", 00:05:17.271 "nbd_stop_disk", 00:05:17.271 "nbd_start_disk", 00:05:17.271 "ublk_recover_disk", 00:05:17.271 "ublk_get_disks", 00:05:17.271 "ublk_stop_disk", 00:05:17.271 "ublk_start_disk", 00:05:17.271 "ublk_destroy_target", 00:05:17.271 "ublk_create_target", 00:05:17.271 "virtio_blk_create_transport", 00:05:17.271 "virtio_blk_get_transports", 00:05:17.271 "vhost_controller_set_coalescing", 00:05:17.271 "vhost_get_controllers", 00:05:17.271 "vhost_delete_controller", 00:05:17.271 "vhost_create_blk_controller", 00:05:17.271 "vhost_scsi_controller_remove_target", 00:05:17.271 "vhost_scsi_controller_add_target", 00:05:17.271 "vhost_start_scsi_controller", 00:05:17.271 "vhost_create_scsi_controller", 00:05:17.271 "thread_set_cpumask", 00:05:17.271 "framework_get_scheduler", 00:05:17.271 "framework_set_scheduler", 00:05:17.271 "framework_get_reactors", 00:05:17.271 "thread_get_io_channels", 00:05:17.271 "thread_get_pollers", 00:05:17.271 "thread_get_stats", 00:05:17.271 "framework_monitor_context_switch", 00:05:17.271 "spdk_kill_instance", 00:05:17.271 "log_enable_timestamps", 00:05:17.271 "log_get_flags", 00:05:17.271 "log_clear_flag", 00:05:17.271 "log_set_flag", 00:05:17.271 "log_get_level", 00:05:17.271 "log_set_level", 00:05:17.271 "log_get_print_level", 00:05:17.271 "log_set_print_level", 00:05:17.271 "framework_enable_cpumask_locks", 00:05:17.271 "framework_disable_cpumask_locks", 00:05:17.271 "framework_wait_init", 00:05:17.271 "framework_start_init", 00:05:17.271 "scsi_get_devices", 00:05:17.271 "bdev_get_histogram", 00:05:17.271 "bdev_enable_histogram", 00:05:17.271 "bdev_set_qos_limit", 00:05:17.271 "bdev_set_qd_sampling_period", 00:05:17.271 "bdev_get_bdevs", 00:05:17.271 "bdev_reset_iostat", 00:05:17.271 "bdev_get_iostat", 00:05:17.271 "bdev_examine", 00:05:17.271 "bdev_wait_for_examine", 00:05:17.271 "bdev_set_options", 00:05:17.271 "notify_get_notifications", 00:05:17.271 "notify_get_types", 00:05:17.271 "accel_get_stats", 00:05:17.271 "accel_set_options", 00:05:17.271 "accel_set_driver", 00:05:17.271 "accel_crypto_key_destroy", 00:05:17.271 "accel_crypto_keys_get", 00:05:17.271 "accel_crypto_key_create", 00:05:17.271 "accel_assign_opc", 00:05:17.271 "accel_get_module_info", 00:05:17.271 "accel_get_opc_assignments", 00:05:17.271 "vmd_rescan", 00:05:17.271 "vmd_remove_device", 00:05:17.271 "vmd_enable", 00:05:17.271 "sock_get_default_impl", 00:05:17.271 "sock_set_default_impl", 00:05:17.271 "sock_impl_set_options", 00:05:17.271 "sock_impl_get_options", 00:05:17.271 "iobuf_get_stats", 00:05:17.271 "iobuf_set_options", 00:05:17.271 "keyring_get_keys", 00:05:17.271 "framework_get_pci_devices", 00:05:17.271 "framework_get_config", 00:05:17.271 "framework_get_subsystems", 00:05:17.271 
"vfu_tgt_set_base_path", 00:05:17.271 "trace_get_info", 00:05:17.271 "trace_get_tpoint_group_mask", 00:05:17.271 "trace_disable_tpoint_group", 00:05:17.271 "trace_enable_tpoint_group", 00:05:17.271 "trace_clear_tpoint_mask", 00:05:17.271 "trace_set_tpoint_mask", 00:05:17.271 "spdk_get_version", 00:05:17.271 "rpc_get_methods" 00:05:17.271 ] 00:05:17.271 15:43:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.271 15:43:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:17.271 15:43:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3568779 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3568779 ']' 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3568779 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3568779 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3568779' 00:05:17.271 killing process with pid 3568779 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3568779 00:05:17.271 15:43:15 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3568779 00:05:17.837 00:05:17.837 real 0m1.518s 00:05:17.837 user 0m2.726s 00:05:17.837 sys 0m0.487s 00:05:17.837 15:43:16 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.837 15:43:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.837 ************************************ 00:05:17.838 END TEST spdkcli_tcp 00:05:17.838 ************************************ 00:05:17.838 15:43:16 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.838 15:43:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.838 15:43:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.838 15:43:16 -- common/autotest_common.sh@10 -- # set +x 00:05:17.838 ************************************ 00:05:17.838 START TEST dpdk_mem_utility 00:05:17.838 ************************************ 00:05:17.838 15:43:16 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.838 * Looking for test storage... 
00:05:17.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:17.838 15:43:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:17.838 15:43:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3569144 00:05:17.838 15:43:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3569144 00:05:17.838 15:43:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.838 15:43:16 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3569144 ']' 00:05:17.838 15:43:16 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.838 15:43:16 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.838 15:43:16 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.838 15:43:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.838 15:43:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.838 [2024-05-15 15:43:16.359543] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:17.838 [2024-05-15 15:43:16.359597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569144 ] 00:05:17.838 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.096 [2024-05-15 15:43:16.428781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.096 [2024-05-15 15:43:16.502592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.661 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.661 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:18.661 15:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:18.661 15:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:18.661 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.661 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.661 { 00:05:18.661 "filename": "/tmp/spdk_mem_dump.txt" 00:05:18.661 } 00:05:18.661 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.661 15:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:18.661 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:18.661 1 heaps totaling size 814.000000 MiB 00:05:18.661 size: 814.000000 MiB heap id: 0 00:05:18.661 end heaps---------- 00:05:18.661 8 mempools totaling size 598.116089 MiB 00:05:18.661 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:18.661 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:18.661 size: 84.521057 MiB name: bdev_io_3569144 00:05:18.661 size: 51.011292 MiB name: evtpool_3569144 00:05:18.661 size: 50.003479 MiB name: 
msgpool_3569144 00:05:18.661 size: 21.763794 MiB name: PDU_Pool 00:05:18.661 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:18.661 size: 0.026123 MiB name: Session_Pool 00:05:18.661 end mempools------- 00:05:18.661 6 memzones totaling size 4.142822 MiB 00:05:18.662 size: 1.000366 MiB name: RG_ring_0_3569144 00:05:18.662 size: 1.000366 MiB name: RG_ring_1_3569144 00:05:18.662 size: 1.000366 MiB name: RG_ring_4_3569144 00:05:18.662 size: 1.000366 MiB name: RG_ring_5_3569144 00:05:18.662 size: 0.125366 MiB name: RG_ring_2_3569144 00:05:18.662 size: 0.015991 MiB name: RG_ring_3_3569144 00:05:18.662 end memzones------- 00:05:18.662 15:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:18.920 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:18.920 list of free elements. size: 12.519348 MiB 00:05:18.920 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:18.920 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:18.920 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:18.920 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:18.920 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:18.920 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:18.920 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:18.920 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:18.920 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:18.920 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:18.920 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:18.920 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:18.920 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:18.920 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:18.920 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:18.920 list of standard malloc elements. 
size: 199.218079 MiB 00:05:18.920 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:18.920 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:18.920 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:18.920 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:18.920 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:18.920 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:18.920 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:18.920 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:18.920 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:18.920 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:18.920 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:18.920 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:18.920 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:18.920 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:18.920 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:18.920 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:18.920 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:18.920 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:18.920 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:18.920 list of memzone associated elements. 
size: 602.262573 MiB 00:05:18.920 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:18.920 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:18.920 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:18.920 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:18.920 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:18.920 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3569144_0 00:05:18.920 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:18.920 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3569144_0 00:05:18.920 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:18.920 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3569144_0 00:05:18.920 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:18.920 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:18.920 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:18.920 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:18.920 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:18.920 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3569144 00:05:18.920 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:18.921 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3569144 00:05:18.921 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:18.921 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3569144 00:05:18.921 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:18.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:18.921 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:18.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:18.921 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:18.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:18.921 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:18.921 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:18.921 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:18.921 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3569144 00:05:18.921 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:18.921 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3569144 00:05:18.921 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:18.921 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3569144 00:05:18.921 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:18.921 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3569144 00:05:18.921 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:18.921 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3569144 00:05:18.921 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:18.921 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:18.921 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:18.921 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:18.921 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:18.921 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:18.921 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:18.921 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3569144 00:05:18.921 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:18.921 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:18.921 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:18.921 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:18.921 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:18.921 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3569144 00:05:18.921 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:18.921 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:18.921 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:18.921 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3569144 00:05:18.921 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:18.921 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3569144 00:05:18.921 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:18.921 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:18.921 15:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:18.921 15:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3569144 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3569144 ']' 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3569144 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3569144 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3569144' 00:05:18.921 killing process with pid 3569144 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3569144 00:05:18.921 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3569144 00:05:19.180 00:05:19.180 real 0m1.456s 00:05:19.180 user 0m1.494s 00:05:19.180 sys 0m0.452s 00:05:19.180 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.180 15:43:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.180 ************************************ 00:05:19.180 END TEST dpdk_mem_utility 00:05:19.180 ************************************ 00:05:19.180 15:43:17 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.180 15:43:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.180 15:43:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.180 15:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.180 ************************************ 00:05:19.180 START TEST event 00:05:19.180 ************************************ 00:05:19.180 15:43:17 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.438 * Looking for test storage... 
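The heap, mempool, and memzone summary printed above comes from scripts/dpdk_mem_info.py, which parses the dump the target writes when the env_dpdk_get_mem_stats RPC is issued (the trace shows the reply pointing at /tmp/spdk_mem_dump.txt). A rough reproduction against a running spdk_tgt, using the same workspace path as this job:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Ask the target to dump its DPDK memory statistics to the default dump file.
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

  # Summarize the dump: overall heaps/mempools/memzones first, then heap 0 in detail,
  # matching the two invocations seen in the trace.
  "$SPDK_DIR/scripts/dpdk_mem_info.py"
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0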
00:05:19.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:19.438 15:43:17 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:19.438 15:43:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:19.438 15:43:17 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.438 15:43:17 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:19.438 15:43:17 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.438 15:43:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.438 ************************************ 00:05:19.438 START TEST event_perf 00:05:19.438 ************************************ 00:05:19.438 15:43:17 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.438 Running I/O for 1 seconds...[2024-05-15 15:43:17.912941] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:19.438 [2024-05-15 15:43:17.913026] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569483 ] 00:05:19.438 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.438 [2024-05-15 15:43:17.987245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.695 [2024-05-15 15:43:18.060978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.695 [2024-05-15 15:43:18.061073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.695 [2024-05-15 15:43:18.061156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.695 [2024-05-15 15:43:18.061159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.627 Running I/O for 1 seconds... 00:05:20.627 lcore 0: 206286 00:05:20.627 lcore 1: 206285 00:05:20.627 lcore 2: 206286 00:05:20.627 lcore 3: 206285 00:05:20.627 done. 00:05:20.627 00:05:20.627 real 0m1.256s 00:05:20.627 user 0m4.164s 00:05:20.628 sys 0m0.088s 00:05:20.628 15:43:19 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.628 15:43:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.628 ************************************ 00:05:20.628 END TEST event_perf 00:05:20.628 ************************************ 00:05:20.628 15:43:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:20.628 15:43:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:20.628 15:43:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.628 15:43:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.886 ************************************ 00:05:20.886 START TEST event_reactor 00:05:20.886 ************************************ 00:05:20.886 15:43:19 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:20.886 [2024-05-15 15:43:19.255031] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:05:20.886 [2024-05-15 15:43:19.255095] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569730 ] 00:05:20.886 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.886 [2024-05-15 15:43:19.328685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.886 [2024-05-15 15:43:19.394476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.257 test_start 00:05:22.257 oneshot 00:05:22.257 tick 100 00:05:22.257 tick 100 00:05:22.257 tick 250 00:05:22.257 tick 100 00:05:22.257 tick 100 00:05:22.257 tick 100 00:05:22.257 tick 250 00:05:22.257 tick 500 00:05:22.257 tick 100 00:05:22.257 tick 100 00:05:22.257 tick 250 00:05:22.257 tick 100 00:05:22.257 tick 100 00:05:22.257 test_end 00:05:22.257 00:05:22.257 real 0m1.249s 00:05:22.257 user 0m1.156s 00:05:22.257 sys 0m0.088s 00:05:22.257 15:43:20 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.257 15:43:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.257 ************************************ 00:05:22.257 END TEST event_reactor 00:05:22.257 ************************************ 00:05:22.257 15:43:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.257 15:43:20 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:22.257 15:43:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.257 15:43:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.257 ************************************ 00:05:22.257 START TEST event_reactor_perf 00:05:22.257 ************************************ 00:05:22.257 15:43:20 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.257 [2024-05-15 15:43:20.590342] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:05:22.257 [2024-05-15 15:43:20.590421] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3570014 ] 00:05:22.257 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.257 [2024-05-15 15:43:20.662688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.257 [2024-05-15 15:43:20.731873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.629 test_start 00:05:23.629 test_end 00:05:23.629 Performance: 513253 events per second 00:05:23.629 00:05:23.629 real 0m1.251s 00:05:23.629 user 0m1.161s 00:05:23.629 sys 0m0.086s 00:05:23.629 15:43:21 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.629 15:43:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.629 ************************************ 00:05:23.629 END TEST event_reactor_perf 00:05:23.629 ************************************ 00:05:23.629 15:43:21 event -- event/event.sh@49 -- # uname -s 00:05:23.629 15:43:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.629 15:43:21 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.629 15:43:21 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.629 15:43:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.629 15:43:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.629 ************************************ 00:05:23.629 START TEST event_scheduler 00:05:23.629 ************************************ 00:05:23.629 15:43:21 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.629 * Looking for test storage... 00:05:23.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:23.629 15:43:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:23.629 15:43:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3570323 00:05:23.629 15:43:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.629 15:43:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:23.629 15:43:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3570323 00:05:23.629 15:43:22 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3570323 ']' 00:05:23.629 15:43:22 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.630 15:43:22 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.630 15:43:22 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.630 15:43:22 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.630 15:43:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.630 [2024-05-15 15:43:22.052457] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:23.630 [2024-05-15 15:43:22.052502] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3570323 ] 00:05:23.630 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.630 [2024-05-15 15:43:22.117286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.630 [2024-05-15 15:43:22.192556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.630 [2024-05-15 15:43:22.192640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.630 [2024-05-15 15:43:22.192661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.630 [2024-05-15 15:43:22.192664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:24.560 15:43:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 POWER: Env isn't set yet! 00:05:24.560 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:24.560 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.560 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.560 POWER: Attempting to initialise PSTAT power management... 00:05:24.560 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:24.560 POWER: Initialized successfully for lcore 0 power management 00:05:24.560 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:24.560 POWER: Initialized successfully for lcore 1 power management 00:05:24.560 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:24.560 POWER: Initialized successfully for lcore 2 power management 00:05:24.560 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:24.560 POWER: Initialized successfully for lcore 3 power management 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.560 15:43:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 [2024-05-15 15:43:22.980799] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
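The scheduler app above is launched with --wait-for-rpc, so its framework only finishes coming up once the test selects a scheduler and triggers initialization over RPC (the rpc_cmd framework_set_scheduler and framework_start_init lines in the trace). Issued directly with rpc.py against the waiting app on /var/tmp/spdk.sock, the same two calls would look roughly like this:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Select the dynamic scheduler; in the trace this is the point where the
  # POWER/cpufreq governor messages appear.
  "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic

  # Finish initialization so the reactors start and the test can proceed.
  "$SPDK_DIR/scripts/rpc.py" framework_start_init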
00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.560 15:43:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.560 15:43:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 ************************************ 00:05:24.560 START TEST scheduler_create_thread 00:05:24.560 ************************************ 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 2 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 3 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 4 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:24.560 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.561 5 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.561 6 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.561 7 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.561 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.561 8 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.819 9 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.819 10 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.819 15:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.218 15:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.218 15:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:26.218 15:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:26.218 15:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.218 15:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.801 15:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.801 15:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:26.801 15:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.801 15:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.732 15:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.732 15:43:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:27.732 15:43:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:27.732 15:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.732 15:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.662 15:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.662 00:05:28.662 real 0m3.894s 00:05:28.662 user 0m0.026s 00:05:28.662 sys 0m0.005s 00:05:28.662 15:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.662 15:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.662 ************************************ 00:05:28.662 END TEST scheduler_create_thread 00:05:28.662 ************************************ 00:05:28.662 15:43:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.662 15:43:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3570323 00:05:28.662 15:43:26 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3570323 ']' 00:05:28.662 15:43:26 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3570323 00:05:28.662 15:43:26 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:28.662 15:43:26 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.662 15:43:26 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3570323 00:05:28.662 15:43:27 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:28.662 15:43:27 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:28.662 15:43:27 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3570323' 00:05:28.662 killing process with pid 3570323 00:05:28.662 15:43:27 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3570323 00:05:28.662 15:43:27 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3570323 00:05:28.920 [2024-05-15 15:43:27.304583] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
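The scheduler_create_thread subtest above drives the target through rpc.py's plugin mechanism: the --plugin scheduler_plugin argument layers the test-only scheduler_thread_* methods from test/event/scheduler on top of the normal RPC set. A hedged sketch of one such call, assuming that plugin module is importable (for example via PYTHONPATH) and reusing the -n/-m/-a values verbatim from the trace:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Assumption: rpc.py imports the plugin by module name, so its directory must be importable.
  export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"

  # One of the thread-create calls from the trace, repeated as-is.
  "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100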
00:05:29.179 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:29.179 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:29.179 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:29.179 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:29.179 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:29.179 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:29.179 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:29.179 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:29.179 00:05:29.179 real 0m5.698s 00:05:29.179 user 0m12.240s 00:05:29.179 sys 0m0.429s 00:05:29.179 15:43:27 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.179 15:43:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.179 ************************************ 00:05:29.179 END TEST event_scheduler 00:05:29.179 ************************************ 00:05:29.179 15:43:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:29.179 15:43:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:29.179 15:43:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.179 15:43:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.179 15:43:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.179 ************************************ 00:05:29.179 START TEST app_repeat 00:05:29.179 ************************************ 00:05:29.179 15:43:27 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3571440 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3571440' 00:05:29.179 Process app_repeat pid: 3571440 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.179 spdk_app_start Round 0 00:05:29.179 15:43:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3571440 /var/tmp/spdk-nbd.sock 00:05:29.179 15:43:27 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3571440 ']' 00:05:29.179 15:43:27 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.179 15:43:27 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.179 15:43:27 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.179 15:43:27 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.179 15:43:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.179 [2024-05-15 15:43:27.731843] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:29.179 [2024-05-15 15:43:27.731903] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3571440 ] 00:05:29.437 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.437 [2024-05-15 15:43:27.801665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.437 [2024-05-15 15:43:27.870514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.437 [2024-05-15 15:43:27.870517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.001 15:43:28 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.001 15:43:28 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:30.001 15:43:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.259 Malloc0 00:05:30.259 15:43:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.516 Malloc1 00:05:30.516 15:43:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.516 15:43:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.774 /dev/nbd0 00:05:30.774 15:43:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.774 15:43:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.774 1+0 records in 00:05:30.774 1+0 records out 00:05:30.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216136 s, 19.0 MB/s 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:30.774 15:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.774 15:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.774 15:43:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.774 /dev/nbd1 00:05:30.774 15:43:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.774 15:43:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.774 1+0 records in 00:05:30.774 1+0 records out 00:05:30.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000270609 s, 15.1 MB/s 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:30.774 15:43:29 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.031 15:43:29 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:31.031 15:43:29 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.031 { 00:05:31.031 "nbd_device": "/dev/nbd0", 00:05:31.031 "bdev_name": "Malloc0" 00:05:31.031 }, 00:05:31.031 { 00:05:31.031 "nbd_device": "/dev/nbd1", 00:05:31.031 "bdev_name": "Malloc1" 00:05:31.031 } 00:05:31.031 ]' 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.031 { 00:05:31.031 "nbd_device": "/dev/nbd0", 00:05:31.031 "bdev_name": "Malloc0" 00:05:31.031 }, 00:05:31.031 { 00:05:31.031 "nbd_device": "/dev/nbd1", 00:05:31.031 "bdev_name": "Malloc1" 00:05:31.031 } 00:05:31.031 ]' 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.031 /dev/nbd1' 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.031 /dev/nbd1' 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.031 256+0 records in 00:05:31.031 256+0 records out 00:05:31.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011453 s, 91.6 MB/s 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:05:31.031 15:43:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.289 256+0 records in 00:05:31.289 256+0 records out 00:05:31.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193837 s, 54.1 MB/s 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.289 256+0 records in 00:05:31.289 256+0 records out 00:05:31.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209199 s, 50.1 MB/s 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.289 15:43:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.547 15:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.804 15:43:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.804 15:43:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.061 15:43:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.319 [2024-05-15 15:43:30.668240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.319 [2024-05-15 15:43:30.732351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.319 [2024-05-15 15:43:30.732353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.319 [2024-05-15 15:43:30.774003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.319 [2024-05-15 15:43:30.774057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
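Each app_repeat round above follows the same NBD round-trip: create two malloc bdevs, export them as kernel block devices over NBD, write identical random data to both through dd, and verify it back with cmp before tearing the disks down. A condensed sketch of one round, with every command grounded in the trace (only the temp-file location and the preloaded nbd module are my assumptions):

#!/usr/bin/env bash
# Sketch of one app_repeat round: export two malloc bdevs over NBD,
# write the same 1 MiB of random data to each, and verify it back.
# Assumes the app_repeat binary is already listening on $SOCK and the
# nbd kernel module is loaded (modprobe nbd).
SOCK=/var/tmp/spdk-nbd.sock
RPC="scripts/rpc.py -s $SOCK"

$RPC bdev_malloc_create 64 4096    # 64 MiB bdev, 4096-byte blocks -> "Malloc0"
$RPC bdev_malloc_create 64 4096    # -> "Malloc1"

$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of=$dev bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest $dev    # fails loudly on any mismatch
done

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1

The oflag=direct on the write keeps the data from being satisfied out of the page cache, so the cmp genuinely exercises the bdev path; those are the same dd and cmp flags visible in the trace.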
00:05:35.606 15:43:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.606 15:43:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.606 spdk_app_start Round 1 00:05:35.606 15:43:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3571440 /var/tmp/spdk-nbd.sock 00:05:35.606 15:43:33 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3571440 ']' 00:05:35.606 15:43:33 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.606 15:43:33 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.606 15:43:33 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.606 15:43:33 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.606 15:43:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.607 15:43:33 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:35.607 15:43:33 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:35.607 15:43:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.607 Malloc0 00:05:35.607 15:43:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.607 Malloc1 00:05:35.607 15:43:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.607 15:43:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.607 /dev/nbd0 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.865 1+0 records in 00:05:35.865 1+0 records out 00:05:35.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261213 s, 15.7 MB/s 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.865 /dev/nbd1 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.865 1+0 records in 00:05:35.865 1+0 records out 00:05:35.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026477 s, 15.5 MB/s 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:35.865 15:43:34 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:35.865 15:43:34 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.865 15:43:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.123 { 00:05:36.123 "nbd_device": "/dev/nbd0", 00:05:36.123 "bdev_name": "Malloc0" 00:05:36.123 }, 00:05:36.123 { 00:05:36.123 "nbd_device": "/dev/nbd1", 00:05:36.123 "bdev_name": "Malloc1" 00:05:36.123 } 00:05:36.123 ]' 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.123 { 00:05:36.123 "nbd_device": "/dev/nbd0", 00:05:36.123 "bdev_name": "Malloc0" 00:05:36.123 }, 00:05:36.123 { 00:05:36.123 "nbd_device": "/dev/nbd1", 00:05:36.123 "bdev_name": "Malloc1" 00:05:36.123 } 00:05:36.123 ]' 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.123 /dev/nbd1' 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.123 /dev/nbd1' 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.123 256+0 records in 00:05:36.123 256+0 records out 00:05:36.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114284 s, 91.8 MB/s 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.123 256+0 records in 00:05:36.123 256+0 records out 00:05:36.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0195388 s, 53.7 MB/s 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.123 15:43:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.380 256+0 records in 00:05:36.380 256+0 records out 00:05:36.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021237 s, 49.4 MB/s 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.381 15:43:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.638 15:43:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.639 15:43:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.897 15:43:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.897 15:43:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.154 15:43:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.412 [2024-05-15 15:43:35.750401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.412 [2024-05-15 15:43:35.813536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.412 [2024-05-15 15:43:35.813538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.412 [2024-05-15 15:43:35.855978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.412 [2024-05-15 15:43:35.856023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
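Rounds 0-2 are produced by a small driver loop in event.sh rather than by three independent launches: after each round the target is asked to kill itself over RPC and is given a few seconds before the next round starts against the restarted app. A simplified stand-in for that control flow, with the verification body elided (the real round logic is shown in the sketch earlier):

#!/usr/bin/env bash
# Simplified stand-in for the outer loop behind Rounds 0-2. The real
# helpers live in autotest_common.sh; this only shows the shape of the
# control flow, not the harness internals.
SOCK=/var/tmp/spdk-nbd.sock
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # ... bdev_malloc_create / nbd_start_disk / dd / cmp / nbd_stop_disk ...
    scripts/rpc.py -s $SOCK spdk_kill_instance SIGTERM
    sleep 3    # matches the 'sleep 3' between rounds in the trace
done

The for i in {0..2} loop, the spdk_kill_instance SIGTERM call, and the sleep 3 all appear verbatim in the xtrace above; how the app re-arms itself between iterations is internal to the app_repeat binary.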
00:05:40.689 15:43:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.689 15:43:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.689 spdk_app_start Round 2 00:05:40.689 15:43:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3571440 /var/tmp/spdk-nbd.sock 00:05:40.689 15:43:38 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3571440 ']' 00:05:40.689 15:43:38 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.689 15:43:38 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.689 15:43:38 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.689 15:43:38 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.689 15:43:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.689 15:43:38 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.689 15:43:38 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:40.689 15:43:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.689 Malloc0 00:05:40.689 15:43:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.689 Malloc1 00:05:40.689 15:43:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.689 /dev/nbd0 00:05:40.689 15:43:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.948 1+0 records in 00:05:40.948 1+0 records out 00:05:40.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212874 s, 19.2 MB/s 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.948 /dev/nbd1 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.948 1+0 records in 00:05:40.948 1+0 records out 00:05:40.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257252 s, 15.9 MB/s 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:40.948 15:43:39 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:40.948 15:43:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.948 15:43:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.206 { 00:05:41.206 "nbd_device": "/dev/nbd0", 00:05:41.206 "bdev_name": "Malloc0" 00:05:41.206 }, 00:05:41.206 { 00:05:41.206 "nbd_device": "/dev/nbd1", 00:05:41.206 "bdev_name": "Malloc1" 00:05:41.206 } 00:05:41.206 ]' 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.206 { 00:05:41.206 "nbd_device": "/dev/nbd0", 00:05:41.206 "bdev_name": "Malloc0" 00:05:41.206 }, 00:05:41.206 { 00:05:41.206 "nbd_device": "/dev/nbd1", 00:05:41.206 "bdev_name": "Malloc1" 00:05:41.206 } 00:05:41.206 ]' 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.206 /dev/nbd1' 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.206 /dev/nbd1' 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.206 256+0 records in 00:05:41.206 256+0 records out 00:05:41.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011376 s, 92.2 MB/s 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.206 256+0 records in 00:05:41.206 256+0 records out 00:05:41.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0182882 s, 57.3 MB/s 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.206 15:43:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.466 256+0 records in 00:05:41.466 256+0 records out 00:05:41.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208179 s, 50.4 MB/s 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.466 15:43:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.466 15:43:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.723 15:43:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.723 15:43:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.723 15:43:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.723 15:43:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.723 15:43:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.723 15:43:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.723 15:43:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.723 15:43:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.724 15:43:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.724 15:43:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.724 15:43:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.981 15:43:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.982 15:43:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.982 15:43:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.268 15:43:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.525 [2024-05-15 15:43:40.837233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.525 [2024-05-15 15:43:40.902735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.525 [2024-05-15 15:43:40.902737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.525 [2024-05-15 15:43:40.944783] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.525 [2024-05-15 15:43:40.944826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
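The teardown that follows relies on killprocess from autotest_common.sh, whose checks are visible step by step in the xtrace: confirm the pid is still alive, branch on the platform, read the process name with ps, special-case a sudo wrapper, then kill and wait. A condensed sketch (the sudo child-chasing detail of the real helper is deliberately elided):

#!/usr/bin/env bash
# Condensed sketch of the killprocess pattern from autotest_common.sh.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                      # pid must still be alive
    local pname
    pname=$(ps --no-headers -o comm= "$pid")        # as in the trace above
    # Refuse to blindly signal a sudo wrapper; the real helper resolves
    # the child pid in that case (detail elided here).
    [ "$pname" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap and propagate status
}

The wait at the end is why the trace shows a separate "wait 3571440" step: it both reaps the child and surfaces a non-zero exit status to the caller.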
00:05:45.805 15:43:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3571440 /var/tmp/spdk-nbd.sock 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3571440 ']' 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:45.805 15:43:43 event.app_repeat -- event/event.sh@39 -- # killprocess 3571440 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3571440 ']' 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3571440 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3571440 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3571440' 00:05:45.805 killing process with pid 3571440 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3571440 00:05:45.805 15:43:43 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3571440 00:05:45.805 spdk_app_start is called in Round 0. 00:05:45.805 Shutdown signal received, stop current app iteration 00:05:45.805 Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 reinitialization... 00:05:45.805 spdk_app_start is called in Round 1. 00:05:45.805 Shutdown signal received, stop current app iteration 00:05:45.805 Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 reinitialization... 00:05:45.805 spdk_app_start is called in Round 2. 00:05:45.805 Shutdown signal received, stop current app iteration 00:05:45.805 Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 reinitialization... 00:05:45.805 spdk_app_start is called in Round 3. 
00:05:45.805 Shutdown signal received, stop current app iteration 00:05:45.805 15:43:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.805 15:43:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.805 00:05:45.805 real 0m16.346s 00:05:45.805 user 0m34.675s 00:05:45.805 sys 0m3.047s 00:05:45.805 15:43:44 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.805 15:43:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.805 ************************************ 00:05:45.805 END TEST app_repeat 00:05:45.805 ************************************ 00:05:45.805 15:43:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.805 15:43:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:45.805 15:43:44 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.805 15:43:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.805 15:43:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.805 ************************************ 00:05:45.805 START TEST cpu_locks 00:05:45.805 ************************************ 00:05:45.805 15:43:44 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:45.805 * Looking for test storage... 00:05:45.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:45.805 15:43:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.805 15:43:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.805 15:43:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.805 15:43:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.805 15:43:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.805 15:43:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.805 15:43:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.805 ************************************ 00:05:45.805 START TEST default_locks 00:05:45.805 ************************************ 00:05:45.805 15:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:45.805 15:43:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3574496 00:05:45.805 15:43:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3574496 00:05:45.805 15:43:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.805 15:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3574496 ']' 00:05:45.805 15:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.805 15:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.806 15:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
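default_locks starts a single spdk_tgt pinned to core 0 (-m 0x1) and blocks on waitforlisten until the RPC socket at /var/tmp/spdk.sock answers, with max_retries=100 as the trace shows. The sketch below mirrors that startup; probing readiness with rpc_get_methods is an assumption, since the real waitforlisten helper may detect the listening socket differently.

# Launch spdk_tgt on core 0 and wait for its RPC socket, as in the trace above.
# The rpc_get_methods readiness probe is an assumption about waitforlisten.
spdk_root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_sock=/var/tmp/spdk.sock

"$spdk_root"/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."

for ((i = 0; i < 100; i++)); do    # max_retries=100, matching the trace
    if "$spdk_root"/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        echo "spdk_tgt (pid $spdk_tgt_pid) is listening on $rpc_sock"
        break
    fi
    sleep 0.5   # retry interval is a guess
done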
00:05:45.806 15:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.806 15:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.806 [2024-05-15 15:43:44.333720] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:45.806 [2024-05-15 15:43:44.333770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574496 ] 00:05:45.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.063 [2024-05-15 15:43:44.403387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.063 [2024-05-15 15:43:44.471955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.626 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.626 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:46.626 15:43:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3574496 00:05:46.626 15:43:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3574496 00:05:46.626 15:43:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.191 lslocks: write error 00:05:47.191 15:43:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3574496 00:05:47.191 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3574496 ']' 00:05:47.191 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3574496 00:05:47.191 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:47.191 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.191 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3574496 00:05:47.449 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.449 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.449 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3574496' 00:05:47.449 killing process with pid 3574496 00:05:47.449 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3574496 00:05:47.449 15:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3574496 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3574496 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3574496 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 3574496 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3574496 ']' 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.706 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3574496) - No such process 00:05:47.707 ERROR: process (pid: 3574496) is no longer running 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.707 00:05:47.707 real 0m1.824s 00:05:47.707 user 0m1.910s 00:05:47.707 sys 0m0.666s 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.707 15:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.707 ************************************ 00:05:47.707 END TEST default_locks 00:05:47.707 ************************************ 00:05:47.707 15:43:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.707 15:43:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.707 15:43:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.707 15:43:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.707 ************************************ 00:05:47.707 START TEST default_locks_via_rpc 00:05:47.707 ************************************ 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3574895 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3574895 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.707 15:43:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3574895 ']' 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.707 15:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.707 [2024-05-15 15:43:46.248461] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:47.707 [2024-05-15 15:43:46.248513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574895 ] 00:05:47.964 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.964 [2024-05-15 15:43:46.317433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.964 [2024-05-15 15:43:46.388855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3574895 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3574895 00:05:48.528 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.785 15:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3574895 00:05:48.785 15:43:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3574895 ']' 00:05:48.785 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3574895 00:05:48.785 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:48.785 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.785 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3574895 00:05:49.043 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:49.043 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:49.043 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3574895' 00:05:49.043 killing process with pid 3574895 00:05:49.043 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3574895 00:05:49.043 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3574895 00:05:49.300 00:05:49.300 real 0m1.514s 00:05:49.300 user 0m1.562s 00:05:49.300 sys 0m0.523s 00:05:49.300 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.300 15:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.300 ************************************ 00:05:49.300 END TEST default_locks_via_rpc 00:05:49.300 ************************************ 00:05:49.300 15:43:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.300 15:43:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.300 15:43:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.300 15:43:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.300 ************************************ 00:05:49.300 START TEST non_locking_app_on_locked_coremask 00:05:49.300 ************************************ 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3575195 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3575195 /var/tmp/spdk.sock 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3575195 ']' 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
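Both lock tests that have run so far verify the core claim the same way: lslocks -p <pid> lists the file locks held by the target and the output is grepped for spdk_cpu_lock, which matches the /var/tmp/spdk_cpu_lock_* files seen later in this trace. The default_locks_via_rpc run that just finished additionally toggles the claim at runtime with framework_disable_cpumask_locks and framework_enable_cpumask_locks. A minimal sketch of that flow, assuming a target is already running and its pid is known:

# locks_exist plus the via-RPC toggle, as exercised in the tests above.
spdk_tgt_pid=${spdk_tgt_pid:?set to the pid of the running spdk_tgt}
rpc_py() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

locks_exist() {
    # The core locks live on /var/tmp/spdk_cpu_lock_* files; lslocks shows them
    # attached to the target pid.
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

rpc_py framework_disable_cpumask_locks          # release the claim at runtime
locks_exist "$spdk_tgt_pid" || echo 'no core locks held'
rpc_py framework_enable_cpumask_locks           # take the claim back
locks_exist "$spdk_tgt_pid" && echo 'core locks re-acquired'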
00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.300 15:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.300 [2024-05-15 15:43:47.832096] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:49.300 [2024-05-15 15:43:47.832140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3575195 ] 00:05:49.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.556 [2024-05-15 15:43:47.900937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.556 [2024-05-15 15:43:47.974320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3575275 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3575275 /var/tmp/spdk2.sock 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3575275 ']' 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.119 15:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.119 [2024-05-15 15:43:48.661804] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:50.119 [2024-05-15 15:43:48.661854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3575275 ] 00:05:50.376 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.376 [2024-05-15 15:43:48.758573] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
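The non_locking_app_on_locked_coremask case above runs two targets on the same single-core mask: the first spdk_tgt holds the core 0 lock, and the second is started with --disable-cpumask-locks and its own RPC socket, which is why it prints "CPU core locks deactivated" and comes up instead of failing the claim. Condensed from the xtrace:

# Two targets sharing core 0; only the first takes the spdk_cpu_lock file.
spdk_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

"$spdk_bin" -m 0x1 &                                                  # claims core 0
pid1=$!
"$spdk_bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips the claim
pid2=$!

# Afterwards only the first pid shows a lock:
#   lslocks -p "$pid1" | grep spdk_cpu_lock    -> match
#   lslocks -p "$pid2" | grep spdk_cpu_lock    -> no match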
00:05:50.376 [2024-05-15 15:43:48.758599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.376 [2024-05-15 15:43:48.901890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.941 15:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.941 15:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:50.941 15:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3575195 00:05:50.941 15:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3575195 00:05:50.941 15:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.314 lslocks: write error 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3575195 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3575195 ']' 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3575195 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575195 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575195' 00:05:52.314 killing process with pid 3575195 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3575195 00:05:52.314 15:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3575195 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3575275 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3575275 ']' 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3575275 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575275 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575275' 00:05:52.878 
killing process with pid 3575275 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3575275 00:05:52.878 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3575275 00:05:53.443 00:05:53.443 real 0m3.982s 00:05:53.443 user 0m4.252s 00:05:53.443 sys 0m1.313s 00:05:53.443 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.443 15:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.443 ************************************ 00:05:53.443 END TEST non_locking_app_on_locked_coremask 00:05:53.443 ************************************ 00:05:53.443 15:43:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.443 15:43:51 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.443 15:43:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.443 15:43:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.443 ************************************ 00:05:53.443 START TEST locking_app_on_unlocked_coremask 00:05:53.443 ************************************ 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3575915 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3575915 /var/tmp/spdk.sock 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3575915 ']' 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:53.443 15:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.443 [2024-05-15 15:43:51.903093] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:05:53.443 [2024-05-15 15:43:51.903139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3575915 ] 00:05:53.443 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.443 [2024-05-15 15:43:51.972709] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.443 [2024-05-15 15:43:51.972736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.700 [2024-05-15 15:43:52.040595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.264 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.264 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:54.264 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.265 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3576048 00:05:54.265 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3576048 /var/tmp/spdk2.sock 00:05:54.265 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3576048 ']' 00:05:54.265 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.265 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.265 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.265 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.265 15:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.265 [2024-05-15 15:43:52.744716] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:05:54.265 [2024-05-15 15:43:52.744766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576048 ] 00:05:54.265 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.523 [2024-05-15 15:43:52.839151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.523 [2024-05-15 15:43:52.982850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.096 15:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.096 15:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:55.096 15:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3576048 00:05:55.096 15:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3576048 00:05:55.096 15:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.474 lslocks: write error 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3575915 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3575915 ']' 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3575915 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575915 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575915' 00:05:56.474 killing process with pid 3575915 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3575915 00:05:56.474 15:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3575915 00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3576048 00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3576048 ']' 00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3576048 00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3576048 00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:57.042 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3576048' 00:05:57.042 killing process with pid 3576048 00:05:57.043 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3576048 00:05:57.043 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3576048 00:05:57.301 00:05:57.301 real 0m3.969s 00:05:57.301 user 0m4.230s 00:05:57.301 sys 0m1.313s 00:05:57.301 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.301 15:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.301 ************************************ 00:05:57.301 END TEST locking_app_on_unlocked_coremask 00:05:57.301 ************************************ 00:05:57.301 15:43:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.301 15:43:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.301 15:43:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.301 15:43:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.591 ************************************ 00:05:57.591 START TEST locking_app_on_locked_coremask 00:05:57.591 ************************************ 00:05:57.591 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:57.591 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.591 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3576614 00:05:57.591 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3576614 /var/tmp/spdk.sock 00:05:57.591 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3576614 ']' 00:05:57.591 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.591 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.592 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.592 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.592 15:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.592 [2024-05-15 15:43:55.956721] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:05:57.592 [2024-05-15 15:43:55.956769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576614 ] 00:05:57.592 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.592 [2024-05-15 15:43:56.026161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.592 [2024-05-15 15:43:56.100439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3576872 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3576872 /var/tmp/spdk2.sock 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3576872 /var/tmp/spdk2.sock 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3576872 /var/tmp/spdk2.sock 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3576872 ']' 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.524 15:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.524 [2024-05-15 15:43:56.809533] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:05:58.524 [2024-05-15 15:43:56.809590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576872 ] 00:05:58.524 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.524 [2024-05-15 15:43:56.904314] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3576614 has claimed it. 00:05:58.524 [2024-05-15 15:43:56.904352] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3576872) - No such process 00:05:59.091 ERROR: process (pid: 3576872) is no longer running 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3576614 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3576614 00:05:59.091 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.658 lslocks: write error 00:05:59.658 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3576614 00:05:59.658 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3576614 ']' 00:05:59.658 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3576614 00:05:59.658 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:59.658 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.658 15:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3576614 00:05:59.658 15:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.658 15:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.658 15:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3576614' 00:05:59.658 killing process with pid 3576614 00:05:59.658 15:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3576614 00:05:59.658 15:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3576614 00:05:59.916 00:05:59.916 real 0m2.465s 00:05:59.916 user 0m2.723s 00:05:59.916 sys 0m0.748s 00:05:59.916 15:43:58 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.916 15:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.916 ************************************ 00:05:59.916 END TEST locking_app_on_locked_coremask 00:05:59.916 ************************************ 00:05:59.916 15:43:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:59.916 15:43:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.916 15:43:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.916 15:43:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.916 ************************************ 00:05:59.916 START TEST locking_overlapped_coremask 00:05:59.916 ************************************ 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3577176 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3577176 /var/tmp/spdk.sock 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3577176 ']' 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.916 15:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.174 [2024-05-15 15:43:58.493521] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
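The negative checks in this file, default_locks after its target was killed and locking_app_on_locked_coremask just above, wrap waitforlisten in NOT: the wrapped command is expected to fail, and the test only passes when it does. A reduced reconstruction of that wrapper; the real helper in autotest_common.sh also routes through valid_exec_arg and treats exit codes above 128 as genuine crashes, which is only hinted at here.

# Reduced NOT wrapper: succeed only when the wrapped command fails.
# Reconstructed from the xtrace; the real helper has extra validation.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # signal deaths still count as real failures
    (( es != 0 ))                    # invert: failure of the command is our success
}

# e.g. the second target must not come up while the first holds core 0:
#   NOT waitforlisten "$pid2" /var/tmp/spdk2.sock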
00:06:00.174 [2024-05-15 15:43:58.493564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577176 ] 00:06:00.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.174 [2024-05-15 15:43:58.562144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.174 [2024-05-15 15:43:58.639048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.174 [2024-05-15 15:43:58.639145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.174 [2024-05-15 15:43:58.639145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3577200 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3577200 /var/tmp/spdk2.sock 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3577200 /var/tmp/spdk2.sock 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3577200 /var/tmp/spdk2.sock 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3577200 ']' 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.107 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.108 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.108 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.108 [2024-05-15 15:43:59.366703] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
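The claim failure reported on the next lines is a pure mask overlap: the first target was started with -m 0x7 (cores 0, 1, 2) and the second asks for -m 0x1c (cores 2, 3, 4), so both want core 2. A quick way to see the collision from the masks alone:

# Decode the two core masks used above and show the shared core.
printf 'first  mask 0x7  -> cores: '; for c in {0..4}; do (( 0x7  >> c & 1 )) && printf '%d ' "$c"; done; echo
printf 'second mask 0x1c -> cores: '; for c in {0..4}; do (( 0x1c >> c & 1 )) && printf '%d ' "$c"; done; echo
printf 'shared bits      -> 0x%x (core 2)\n' $(( 0x7 & 0x1c ))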
00:06:01.108 [2024-05-15 15:43:59.366755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577200 ] 00:06:01.108 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.108 [2024-05-15 15:43:59.461776] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3577176 has claimed it. 00:06:01.108 [2024-05-15 15:43:59.461809] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3577200) - No such process 00:06:01.673 ERROR: process (pid: 3577200) is no longer running 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3577176 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3577176 ']' 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3577176 00:06:01.673 15:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:01.673 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.673 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577176 00:06:01.673 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.673 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.673 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577176' 00:06:01.673 killing process with pid 3577176 00:06:01.673 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3577176 00:06:01.673 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3577176 00:06:01.932 00:06:01.932 real 0m1.932s 00:06:01.932 user 0m5.435s 00:06:01.932 sys 0m0.433s 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.932 ************************************ 00:06:01.932 END TEST locking_overlapped_coremask 00:06:01.932 ************************************ 00:06:01.932 15:44:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:01.932 15:44:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.932 15:44:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.932 15:44:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.932 ************************************ 00:06:01.932 START TEST locking_overlapped_coremask_via_rpc 00:06:01.932 ************************************ 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3577486 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3577486 /var/tmp/spdk.sock 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3577486 ']' 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.932 15:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.190 [2024-05-15 15:44:00.530611] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:02.190 [2024-05-15 15:44:00.530655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577486 ] 00:06:02.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.190 [2024-05-15 15:44:00.599413] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
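One detail from the overlapped test that just ended: after the second target fails its claim, check_remaining_locks asserts that the only lock files left are the three belonging to the surviving -m 0x7 target, i.e. cores 000 through 002. The xtrace reduces to:

# check_remaining_locks, as traced in the overlapped test above.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'only cores 0-2 remain locked'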
00:06:02.190 [2024-05-15 15:44:00.599438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.190 [2024-05-15 15:44:00.676977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.190 [2024-05-15 15:44:00.677069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.190 [2024-05-15 15:44:00.677074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3577733 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3577733 /var/tmp/spdk2.sock 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3577733 ']' 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.123 15:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.123 [2024-05-15 15:44:01.378305] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:03.123 [2024-05-15 15:44:01.378360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577733 ] 00:06:03.123 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.123 [2024-05-15 15:44:01.479375] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.123 [2024-05-15 15:44:01.479404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.123 [2024-05-15 15:44:01.630448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.123 [2024-05-15 15:44:01.630566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.123 [2024-05-15 15:44:01.630567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.690 [2024-05-15 15:44:02.210265] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3577486 has claimed it. 
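The error is the intended collision: framework_enable_cpumask_locks succeeded on the first target (pid 3577486), so when the same RPC reaches the second target its reactor on core 2 cannot take the claim. The contested core falls straight out of the masks, and each claimed core is backed by a lock file under /var/tmp (the listing below is illustrative, mirroring the locks_expected check later in this test):

    $ echo $(( 0x7 & 0x1c ))    # 4 == 1 << 2, so core 2 is the only contested core
    4
    # One lock file per claimed core; illustrative listing for mask 0x7:
    $ ls /var/tmp/spdk_cpu_lock_*
    /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002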
00:06:03.690 request: 00:06:03.690 { 00:06:03.690 "method": "framework_enable_cpumask_locks", 00:06:03.690 "req_id": 1 00:06:03.690 } 00:06:03.690 Got JSON-RPC error response 00:06:03.690 response: 00:06:03.690 { 00:06:03.690 "code": -32603, 00:06:03.690 "message": "Failed to claim CPU core: 2" 00:06:03.690 } 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3577486 /var/tmp/spdk.sock 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3577486 ']' 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.690 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3577733 /var/tmp/spdk2.sock 00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3577733 ']' 00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
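For reference, -32603 in the response above is the standard JSON-RPC 2.0 "internal error" code, which SPDK uses here to carry the core-claim failure. Issuing the same call by hand would look roughly like this (a sketch; it assumes the stock scripts/rpc.py client):

    # Hand-issued equivalent of the failing rpc_cmd above (sketch):
    $ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> fails with "Failed to claim CPU core: 2", since process 3577486
    #    already holds the lock file for core 2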
00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.948 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.205 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.205 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:04.205 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:04.205 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.205 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.205 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.205 00:06:04.205 real 0m2.105s 00:06:04.205 user 0m0.842s 00:06:04.205 sys 0m0.194s 00:06:04.205 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.205 15:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.205 ************************************ 00:06:04.205 END TEST locking_overlapped_coremask_via_rpc 00:06:04.205 ************************************ 00:06:04.205 15:44:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:04.205 15:44:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3577486 ]] 00:06:04.205 15:44:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3577486 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3577486 ']' 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3577486 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577486 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577486' 00:06:04.205 killing process with pid 3577486 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3577486 00:06:04.205 15:44:02 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3577486 00:06:04.462 15:44:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3577733 ]] 00:06:04.462 15:44:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3577733 00:06:04.462 15:44:03 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3577733 ']' 00:06:04.462 15:44:03 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3577733 00:06:04.462 15:44:03 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:04.720 15:44:03 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:04.720 15:44:03 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577733 00:06:04.720 15:44:03 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:04.720 15:44:03 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:04.720 15:44:03 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577733' 00:06:04.720 killing process with pid 3577733 00:06:04.720 15:44:03 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3577733 00:06:04.720 15:44:03 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3577733 00:06:04.978 15:44:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.978 15:44:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.978 15:44:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3577486 ]] 00:06:04.978 15:44:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3577486 00:06:04.978 15:44:03 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3577486 ']' 00:06:04.978 15:44:03 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3577486 00:06:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3577486) - No such process 00:06:04.978 15:44:03 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3577486 is not found' 00:06:04.978 Process with pid 3577486 is not found 00:06:04.978 15:44:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3577733 ]] 00:06:04.978 15:44:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3577733 00:06:04.978 15:44:03 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3577733 ']' 00:06:04.978 15:44:03 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3577733 00:06:04.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3577733) - No such process 00:06:04.978 15:44:03 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3577733 is not found' 00:06:04.978 Process with pid 3577733 is not found 00:06:04.978 15:44:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.978 00:06:04.978 real 0m19.304s 00:06:04.978 user 0m31.670s 00:06:04.978 sys 0m6.257s 00:06:04.978 15:44:03 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.978 15:44:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.978 ************************************ 00:06:04.978 END TEST cpu_locks 00:06:04.978 ************************************ 00:06:04.979 00:06:04.979 real 0m45.729s 00:06:04.979 user 1m25.291s 00:06:04.979 sys 0m10.408s 00:06:04.979 15:44:03 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.979 15:44:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.979 ************************************ 00:06:04.979 END TEST event 00:06:04.979 ************************************ 00:06:04.979 15:44:03 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.979 15:44:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.979 15:44:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.979 15:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.237 ************************************ 00:06:05.237 START TEST thread 00:06:05.237 ************************************ 00:06:05.237 15:44:03 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:05.237 * Looking for test storage... 00:06:05.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:05.237 15:44:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.237 15:44:03 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:05.237 15:44:03 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.237 15:44:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.237 ************************************ 00:06:05.237 START TEST thread_poller_perf 00:06:05.237 ************************************ 00:06:05.237 15:44:03 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.237 [2024-05-15 15:44:03.748246] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:05.237 [2024-05-15 15:44:03.748327] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578123 ] 00:06:05.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.495 [2024-05-15 15:44:03.820784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.495 [2024-05-15 15:44:03.889630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.495 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:06.432 ====================================== 00:06:06.432 busy:2508305862 (cyc) 00:06:06.432 total_run_count: 424000 00:06:06.432 tsc_hz: 2500000000 (cyc) 00:06:06.432 ====================================== 00:06:06.432 poller_cost: 5915 (cyc), 2366 (nsec) 00:06:06.432 00:06:06.432 real 0m1.259s 00:06:06.432 user 0m1.166s 00:06:06.432 sys 0m0.090s 00:06:06.432 15:44:04 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.432 15:44:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.433 ************************************ 00:06:06.433 END TEST thread_poller_perf 00:06:06.433 ************************************ 00:06:06.691 15:44:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.691 15:44:05 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:06.691 15:44:05 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.691 15:44:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.691 ************************************ 00:06:06.691 START TEST thread_poller_perf 00:06:06.691 ************************************ 00:06:06.691 15:44:05 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.691 [2024-05-15 15:44:05.098206] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
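The figures in the first poller_perf summary above are internally consistent: poller_cost appears to be the busy cycle count divided by total_run_count, and the nanosecond figure follows from tsc_hz. Both can be checked with plain shell arithmetic:

    $ echo $(( 2508305862 / 424000 ))              # busy cycles / total_run_count
    5915                                           # matches poller_cost (cyc)
    $ echo $(( 5915 * 1000000000 / 2500000000 ))   # cycles -> ns at tsc_hz 2.5 GHz
    2366                                           # matches poller_cost (nsec)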
00:06:06.691 [2024-05-15 15:44:05.098289] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578413 ] 00:06:06.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.691 [2024-05-15 15:44:05.171134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.691 [2024-05-15 15:44:05.236500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.691 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:08.070 ====================================== 00:06:08.070 busy:2501645196 (cyc) 00:06:08.070 total_run_count: 5631000 00:06:08.070 tsc_hz: 2500000000 (cyc) 00:06:08.070 ====================================== 00:06:08.070 poller_cost: 444 (cyc), 177 (nsec) 00:06:08.070 00:06:08.070 real 0m1.251s 00:06:08.070 user 0m1.167s 00:06:08.070 sys 0m0.080s 00:06:08.070 15:44:06 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.070 15:44:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.070 ************************************ 00:06:08.070 END TEST thread_poller_perf 00:06:08.070 ************************************ 00:06:08.070 15:44:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:08.070 00:06:08.070 real 0m2.804s 00:06:08.070 user 0m2.448s 00:06:08.070 sys 0m0.360s 00:06:08.070 15:44:06 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.070 15:44:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.070 ************************************ 00:06:08.070 END TEST thread 00:06:08.070 ************************************ 00:06:08.070 15:44:06 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:08.070 15:44:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.070 15:44:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.070 15:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.070 ************************************ 00:06:08.070 START TEST accel 00:06:08.070 ************************************ 00:06:08.070 15:44:06 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:08.070 * Looking for test storage... 00:06:08.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:08.070 15:44:06 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:08.070 15:44:06 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:08.070 15:44:06 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.070 15:44:06 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3578735 00:06:08.070 15:44:06 accel -- accel/accel.sh@63 -- # waitforlisten 3578735 00:06:08.070 15:44:06 accel -- common/autotest_common.sh@827 -- # '[' -z 3578735 ']' 00:06:08.070 15:44:06 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.070 15:44:06 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.070 15:44:06 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:08.070 15:44:06 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
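Comparing the two poller_perf runs above: with -l 1 (a 1 microsecond period) each poll cost 5915 cycles, while with -l 0 (no period) it drops to 444 cycles, so the periodic-timer bookkeeping dominates the first run. The second run's numbers check out the same way:

    $ echo $(( 2501645196 / 5631000 ))             # -> 444 (cyc)
    444
    $ echo $(( 444 * 1000000000 / 2500000000 ))    # -> 177 (nsec)
    177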
00:06:08.070 15:44:06 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:08.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.070 15:44:06 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.070 15:44:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.070 15:44:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.070 15:44:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.070 15:44:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.070 15:44:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.070 15:44:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.070 15:44:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:08.070 15:44:06 accel -- accel/accel.sh@41 -- # jq -r . 00:06:08.070 [2024-05-15 15:44:06.605172] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:08.070 [2024-05-15 15:44:06.605223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578735 ] 00:06:08.070 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.329 [2024-05-15 15:44:06.673543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.329 [2024-05-15 15:44:06.742963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.898 15:44:07 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.898 15:44:07 accel -- common/autotest_common.sh@860 -- # return 0 00:06:08.898 15:44:07 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:08.898 15:44:07 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:08.898 15:44:07 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:08.898 15:44:07 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:08.898 15:44:07 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:08.898 15:44:07 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:08.898 15:44:07 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:08.898 15:44:07 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.898 15:44:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.898 15:44:07 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 
15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.898 15:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.898 15:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.898 15:44:07 accel -- accel/accel.sh@75 -- # killprocess 3578735 00:06:08.898 15:44:07 accel -- common/autotest_common.sh@946 -- # '[' -z 3578735 ']' 00:06:08.898 15:44:07 accel -- common/autotest_common.sh@950 -- # kill -0 3578735 00:06:08.898 15:44:07 accel -- common/autotest_common.sh@951 -- # uname 00:06:08.899 15:44:07 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.899 15:44:07 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3578735 00:06:09.158 15:44:07 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.158 15:44:07 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.158 15:44:07 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3578735' 00:06:09.158 killing process with pid 3578735 00:06:09.158 15:44:07 accel -- common/autotest_common.sh@965 -- # kill 3578735 00:06:09.158 15:44:07 accel -- common/autotest_common.sh@970 -- # wait 3578735 00:06:09.417 15:44:07 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:09.418 15:44:07 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:09.418 15:44:07 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:09.418 15:44:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.418 15:44:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.418 15:44:07 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:09.418 15:44:07 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
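The opcode-assignment loop earlier in the trace (accel.sh@70-73) populates expected_opcs from a live RPC query; every opcode comes back assigned to the software module since no hardware accel config was supplied. The query itself, with the jq filter verbatim from the script, looks like this (the sample output is illustrative):

    $ scripts/rpc.py accel_get_opc_assignments \
          | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    copy=software
    fill=software
    crc32c=software
    ...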
00:06:09.418 15:44:07 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.418 15:44:07 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:09.418 15:44:07 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:09.418 15:44:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:09.418 15:44:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.418 15:44:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.677 ************************************ 00:06:09.677 START TEST accel_missing_filename 00:06:09.677 ************************************ 00:06:09.677 15:44:07 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:09.677 15:44:07 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:09.677 15:44:07 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:09.677 15:44:07 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:09.677 15:44:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.677 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:09.677 15:44:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.677 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:09.677 15:44:08 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:09.677 [2024-05-15 15:44:08.031246] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:09.677 [2024-05-15 15:44:08.031312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579037 ] 00:06:09.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.677 [2024-05-15 15:44:08.102646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.677 [2024-05-15 15:44:08.176067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.677 [2024-05-15 15:44:08.217383] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.936 [2024-05-15 15:44:08.276703] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:09.936 A filename is required. 
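accel_missing_filename is a negative test: compress with no -l input file is supposed to fail, and the NOT wrapper from autotest_common.sh turns that expected failure into a test pass. The sketch below gives the gist of NOT; the real helper (visible above as valid_exec_arg plus the es bookkeeping) is more elaborate.

    # Gist of the NOT helper: succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # non-zero exit is the expected outcome
    }
    NOT accel_perf -t 1 -w compress   # passes: -l <input file> is missing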
00:06:09.936 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:09.936 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.936 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:09.936 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:09.936 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:09.936 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.936 00:06:09.936 real 0m0.365s 00:06:09.936 user 0m0.261s 00:06:09.936 sys 0m0.142s 00:06:09.936 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.936 15:44:08 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:09.936 ************************************ 00:06:09.936 END TEST accel_missing_filename 00:06:09.936 ************************************ 00:06:09.936 15:44:08 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.936 15:44:08 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:09.936 15:44:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.936 15:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.936 ************************************ 00:06:09.936 START TEST accel_compress_verify 00:06:09.936 ************************************ 00:06:09.936 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.936 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:09.936 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.936 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:09.936 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.936 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:09.936 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.936 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.936 15:44:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.936 15:44:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:09.936 15:44:08 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.936 15:44:08 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.936 15:44:08 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.936 15:44:08 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.936 15:44:08 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.936 
15:44:08 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:09.936 15:44:08 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:09.937 [2024-05-15 15:44:08.485732] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:09.937 [2024-05-15 15:44:08.485787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579068 ] 00:06:10.196 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.196 [2024-05-15 15:44:08.556782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.196 [2024-05-15 15:44:08.626200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.196 [2024-05-15 15:44:08.667509] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.196 [2024-05-15 15:44:08.728019] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:10.456 00:06:10.456 Compression does not support the verify option, aborting. 00:06:10.456 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:10.456 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.456 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:10.456 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:10.456 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:10.456 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.456 00:06:10.456 real 0m0.364s 00:06:10.456 user 0m0.264s 00:06:10.456 sys 0m0.140s 00:06:10.456 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.456 15:44:08 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:10.456 ************************************ 00:06:10.456 END TEST accel_compress_verify 00:06:10.456 ************************************ 00:06:10.456 15:44:08 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:10.456 15:44:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:10.456 15:44:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.456 15:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.456 ************************************ 00:06:10.456 START TEST accel_wrong_workload 00:06:10.456 ************************************ 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
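The es bookkeeping above shows how the harness normalizes exit statuses before judging them: compress_verify exits with 161 and the trace folds it to 33, just as the missing-filename case folded 234 to 106. Both are consistent with subtracting 128 from any status above 128 (a sketch of the apparent logic; the real case statement handles more):

    es=161                                  # raw accel_perf exit status
    (( es > 128 )) && es=$(( es - 128 ))    # 161 -> 33, as in the trace
    echo "$es"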
00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:10.456 15:44:08 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:10.456 Unsupported workload type: foobar 00:06:10.456 [2024-05-15 15:44:08.940352] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:10.456 accel_perf options: 00:06:10.456 [-h help message] 00:06:10.456 [-q queue depth per core] 00:06:10.456 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.456 [-T number of threads per core 00:06:10.456 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.456 [-t time in seconds] 00:06:10.456 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.456 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.456 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.456 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.456 [-S for crc32c workload, use this seed value (default 0) 00:06:10.456 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.456 [-f for fill workload, use this BYTE value (default 255) 00:06:10.456 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.456 [-y verify result if this switch is on] 00:06:10.456 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.456 Can be used to spread operations across a wider range of memory. 
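The usage dump above is accel_perf rejecting -w foobar; a well-formed invocation per that help text is exactly what the upcoming crc32c test runs:

    # Matches the crc32c test that follows (binary path shortened):
    accel_perf -t 1 -w crc32c -S 32 -y   # 1 s of crc32c, seed 32, verify results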
00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.456 00:06:10.456 real 0m0.036s 00:06:10.456 user 0m0.021s 00:06:10.456 sys 0m0.015s 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.456 15:44:08 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:10.456 ************************************ 00:06:10.456 END TEST accel_wrong_workload 00:06:10.456 ************************************ 00:06:10.456 Error: writing output failed: Broken pipe 00:06:10.456 15:44:08 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.456 15:44:08 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:10.456 15:44:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.456 15:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.716 ************************************ 00:06:10.716 START TEST accel_negative_buffers 00:06:10.716 ************************************ 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:10.716 15:44:09 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:10.716 -x option must be non-negative. 
00:06:10.716 [2024-05-15 15:44:09.057492] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:10.716 accel_perf options: 00:06:10.716 [-h help message] 00:06:10.716 [-q queue depth per core] 00:06:10.716 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.716 [-T number of threads per core 00:06:10.716 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.716 [-t time in seconds] 00:06:10.716 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.716 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.716 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.716 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.716 [-S for crc32c workload, use this seed value (default 0) 00:06:10.716 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.716 [-f for fill workload, use this BYTE value (default 255) 00:06:10.716 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.716 [-y verify result if this switch is on] 00:06:10.716 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.716 Can be used to spread operations across a wider range of memory. 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.716 00:06:10.716 real 0m0.025s 00:06:10.716 user 0m0.014s 00:06:10.716 sys 0m0.011s 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.716 15:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:10.716 ************************************ 00:06:10.716 END TEST accel_negative_buffers 00:06:10.716 ************************************ 00:06:10.716 Error: writing output failed: Broken pipe 00:06:10.716 15:44:09 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:10.716 15:44:09 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:10.716 15:44:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.716 15:44:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.716 ************************************ 00:06:10.716 START TEST accel_crc32c 00:06:10.716 ************************************ 00:06:10.716 15:44:09 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:10.716 15:44:09 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:10.716 [2024-05-15 15:44:09.180050] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:10.716 [2024-05-15 15:44:09.180122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579381 ] 00:06:10.717 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.717 [2024-05-15 15:44:09.252639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.975 [2024-05-15 15:44:09.328649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.975 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.975 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.976 15:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.392 15:44:10 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:12.392 15:44:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.392 00:06:12.392 real 0m1.379s 00:06:12.392 user 0m1.247s 00:06:12.392 sys 0m0.145s 00:06:12.392 15:44:10 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.392 15:44:10 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:12.392 ************************************ 00:06:12.392 END TEST accel_crc32c 00:06:12.392 ************************************ 00:06:12.392 15:44:10 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:12.392 15:44:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:12.392 15:44:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.392 15:44:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.392 ************************************ 00:06:12.392 START TEST accel_crc32c_C2 00:06:12.392 ************************************ 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:12.392 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:12.393 [2024-05-15 15:44:10.651166] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:12.393 [2024-05-15 15:44:10.651234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579664 ] 00:06:12.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.393 [2024-05-15 15:44:10.722533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.393 [2024-05-15 15:44:10.794775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.393 15:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.771 15:44:11 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.771 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.772 00:06:13.772 real 0m1.373s 00:06:13.772 user 0m1.254s 00:06:13.772 sys 0m0.134s 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.772 15:44:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:13.772 ************************************ 00:06:13.772 END TEST accel_crc32c_C2 00:06:13.772 ************************************ 00:06:13.772 15:44:12 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:13.772 15:44:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:13.772 15:44:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.772 15:44:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.772 ************************************ 00:06:13.772 START TEST accel_copy 00:06:13.772 ************************************ 00:06:13.772 15:44:12 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:13.772 15:44:12 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:13.772 [2024-05-15 15:44:12.117085] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:13.772 [2024-05-15 15:44:12.117143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579913 ] 00:06:13.772 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.772 [2024-05-15 15:44:12.204369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.772 [2024-05-15 15:44:12.272596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.772 15:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:15.147 15:44:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.147 00:06:15.147 real 0m1.384s 00:06:15.147 user 0m1.256s 00:06:15.147 sys 0m0.140s 00:06:15.147 15:44:13 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.147 15:44:13 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:15.147 ************************************ 00:06:15.147 END TEST accel_copy 00:06:15.147 ************************************ 00:06:15.147 15:44:13 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.147 15:44:13 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:15.147 15:44:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.147 15:44:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.147 ************************************ 00:06:15.147 START TEST accel_fill 00:06:15.147 ************************************ 00:06:15.147 15:44:13 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.147 15:44:13 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:15.147 15:44:13 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:15.147 [2024-05-15 15:44:13.597024] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:15.147 [2024-05-15 15:44:13.597121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580176 ] 00:06:15.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.147 [2024-05-15 15:44:13.673458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.406 [2024-05-15 15:44:13.743922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.406 15:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:16.783 15:44:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.783 00:06:16.783 real 0m1.379s 00:06:16.783 user 0m1.257s 00:06:16.783 sys 0m0.135s 00:06:16.783 15:44:14 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.783 15:44:14 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:16.783 ************************************ 00:06:16.783 END TEST accel_fill 00:06:16.783 ************************************ 00:06:16.783 15:44:14 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:16.783 15:44:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:16.783 15:44:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.783 15:44:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.783 ************************************ 00:06:16.783 START TEST accel_copy_crc32c 00:06:16.783 ************************************ 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
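The accel_copy_crc32c case that starts here reduces to the single accel_perf command echoed above: the build_accel_config lines assemble an accel module JSON config (empty in this run) that is apparently handed to accel_perf via the -c /dev/fd/62 argument, and the accel_module=software check later in the output confirms the operation was serviced by the software module. A minimal sketch of repeating the run by hand, assuming the SPDK tree is built at the workspace path shown in the log (the JSON-config plumbing is left out, which should likewise leave the software module in use):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # one-second copy_crc32c workload; flags as echoed above (-t 1 matches the '1 seconds' value read back)
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y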
00:06:16.783 [2024-05-15 15:44:15.061317] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:16.783 [2024-05-15 15:44:15.061371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580418 ] 00:06:16.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.783 [2024-05-15 15:44:15.130362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.783 [2024-05-15 15:44:15.200090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.783 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.784 15:44:15 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.784 15:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.162 00:06:18.162 real 0m1.368s 00:06:18.162 user 0m1.248s 00:06:18.162 sys 0m0.134s 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.162 15:44:16 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:18.162 ************************************ 00:06:18.162 END TEST accel_copy_crc32c 00:06:18.162 ************************************ 00:06:18.162 15:44:16 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.162 15:44:16 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:18.162 15:44:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.162 15:44:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.162 ************************************ 00:06:18.162 START TEST accel_copy_crc32c_C2 00:06:18.162 ************************************ 00:06:18.162 15:44:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.162 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.162 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:18.162 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.162 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.162 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:18.162 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:18.163 [2024-05-15 15:44:16.518670] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:18.163 [2024-05-15 15:44:16.518729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580665 ] 00:06:18.163 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.163 [2024-05-15 15:44:16.588892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.163 [2024-05-15 15:44:16.658409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:18.163 15:44:16 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.163 15:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.539 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.540 00:06:19.540 real 0m1.368s 00:06:19.540 user 0m1.245s 00:06:19.540 sys 0m0.138s 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.540 15:44:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:19.540 ************************************ 00:06:19.540 END TEST accel_copy_crc32c_C2 00:06:19.540 ************************************ 00:06:19.540 15:44:17 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:19.540 15:44:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:19.540 15:44:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.540 15:44:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.540 ************************************ 00:06:19.540 START TEST accel_dualcast 00:06:19.540 ************************************ 00:06:19.540 15:44:17 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:19.540 15:44:17 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:19.540 [2024-05-15 15:44:17.975714] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:06:19.540 [2024-05-15 15:44:17.975776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580899 ] 00:06:19.540 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.540 [2024-05-15 15:44:18.045686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.798 [2024-05-15 15:44:18.117918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:19.798 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 
15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.799 15:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.173 15:44:19 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:21.173 15:44:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.173 00:06:21.173 real 0m1.369s 00:06:21.173 user 0m1.248s 00:06:21.173 sys 0m0.135s 00:06:21.173 15:44:19 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.173 15:44:19 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:21.173 ************************************ 00:06:21.173 END TEST accel_dualcast 00:06:21.173 ************************************ 00:06:21.173 15:44:19 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:21.173 15:44:19 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:21.173 15:44:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.173 15:44:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.173 ************************************ 00:06:21.173 START TEST accel_compare 00:06:21.173 ************************************ 00:06:21.173 15:44:19 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:21.173 [2024-05-15 15:44:19.437611] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
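Each test closes with the same three checks ([[ -n software ]], [[ -n <opcode> ]], software == software), confirming the operation ran on the software accel module, followed by the shell's real/user/sys timing for the roughly one-second run. If this console output is saved to a file, those per-test timings can be pulled out with a one-liner such as the following (the file name is a placeholder):

  # list each test's wall-clock result from a saved copy of this console log
  grep -E 'real[[:space:]]+[0-9]+m[0-9.]+s' console.log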
00:06:21.173 [2024-05-15 15:44:19.437688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581146 ] 00:06:21.173 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.173 [2024-05-15 15:44:19.509138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.173 [2024-05-15 15:44:19.579565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.173 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 15:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:20 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:22.549 15:44:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.549 00:06:22.549 real 0m1.372s 00:06:22.549 user 0m1.247s 00:06:22.549 sys 0m0.137s 00:06:22.549 15:44:20 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.549 15:44:20 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:22.549 ************************************ 00:06:22.549 END TEST accel_compare 00:06:22.549 ************************************ 00:06:22.549 15:44:20 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:22.549 15:44:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:22.549 15:44:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.549 15:44:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.549 ************************************ 00:06:22.549 START TEST accel_xor 00:06:22.549 ************************************ 00:06:22.549 15:44:20 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:22.549 15:44:20 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:22.549 [2024-05-15 15:44:20.897809] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
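accel_compare finishes in essentially the same time as accel_dualcast (real 0m1.372s vs 0m1.369s), since both runs use the same harness configuration visible in the trace: 4096-byte buffers, the same pair of 32-valued settings, and a 1-second duration. The xor pass that starts next uses two source buffers by default (the val=2 in its trace). A sketch of running this software-path family back to back, under the same assumptions as the dualcast sketch above:

  # loop the copy-style software workloads covered in this stretch of the log
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for w in dualcast compare xor; do
      ./build/examples/accel_perf -t 1 -w "$w" -y
  done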
00:06:22.549 [2024-05-15 15:44:20.897866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581413 ] 00:06:22.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.549 [2024-05-15 15:44:20.969321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.549 [2024-05-15 15:44:21.039558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.549 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.550 15:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.922 
15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.922 00:06:23.922 real 0m1.369s 00:06:23.922 user 0m1.244s 00:06:23.922 sys 0m0.140s 00:06:23.922 15:44:22 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.922 15:44:22 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:23.922 ************************************ 00:06:23.922 END TEST accel_xor 00:06:23.922 ************************************ 00:06:23.922 15:44:22 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:23.922 15:44:22 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:23.922 15:44:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.922 15:44:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.922 ************************************ 00:06:23.922 START TEST accel_xor 00:06:23.922 ************************************ 00:06:23.922 15:44:22 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:23.922 15:44:22 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:23.922 [2024-05-15 15:44:22.356117] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
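The second accel_xor pass re-runs the same workload with -x 3, so its trace below shows val=3 where the first pass showed val=2, i.e. three xor source buffers instead of the default two. Side by side, again under the assumptions of the earlier sketches:

  # xor with the default two source buffers vs. an explicit three (-x 3)
  ./build/examples/accel_perf -t 1 -w xor -y
  ./build/examples/accel_perf -t 1 -w xor -y -x 3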
00:06:23.922 [2024-05-15 15:44:22.356177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581703 ] 00:06:23.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.922 [2024-05-15 15:44:22.426926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.180 [2024-05-15 15:44:22.503107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:24.180 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.181 15:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.553 
15:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:25.553 15:44:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.553 00:06:25.553 real 0m1.374s 00:06:25.553 user 0m1.254s 00:06:25.553 sys 0m0.133s 00:06:25.553 15:44:23 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.554 15:44:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:25.554 ************************************ 00:06:25.554 END TEST accel_xor 00:06:25.554 ************************************ 00:06:25.554 15:44:23 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:25.554 15:44:23 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:25.554 15:44:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.554 15:44:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.554 ************************************ 00:06:25.554 START TEST accel_dif_verify 00:06:25.554 ************************************ 00:06:25.554 15:44:23 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:25.554 [2024-05-15 15:44:23.821171] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
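accel_dif_verify is the first workload in this stretch that is not a plain copy/compare: its config block carries two 4096-byte values plus 512-byte and 8-byte entries, presumably a 512-byte block size with 8 bytes of DIF protection information per block, and unlike the earlier runs it is invoked without -y. A by-hand sketch, with the same caveats as above (the block geometry comes from the harness config piped over /dev/fd/62 rather than from command-line flags):

  # sketch: DIF verify for 1 second using accel_perf defaults for the block geometry
  ./build/examples/accel_perf -t 1 -w dif_verify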
00:06:25.554 [2024-05-15 15:44:23.821242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581988 ] 00:06:25.554 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.554 [2024-05-15 15:44:23.892260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.554 [2024-05-15 15:44:23.958415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 
15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.554 15:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.928 
15:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:26.928 15:44:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.928 00:06:26.928 real 0m1.363s 00:06:26.928 user 0m1.246s 00:06:26.928 sys 0m0.132s 00:06:26.928 15:44:25 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.928 15:44:25 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.928 ************************************ 00:06:26.928 END TEST accel_dif_verify 00:06:26.928 ************************************ 00:06:26.928 15:44:25 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:26.928 15:44:25 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:26.928 15:44:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.928 15:44:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.928 ************************************ 00:06:26.928 START TEST accel_dif_generate 00:06:26.928 ************************************ 00:06:26.928 15:44:25 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 
15:44:25 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:26.928 [2024-05-15 15:44:25.275143] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:26.928 [2024-05-15 15:44:25.275231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582269 ] 00:06:26.928 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.928 [2024-05-15 15:44:25.345066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.928 [2024-05-15 15:44:25.414534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.928 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 15:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:28.359 15:44:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.359 00:06:28.359 real 0m1.364s 00:06:28.359 user 0m1.248s 00:06:28.359 sys 
0m0.131s 00:06:28.359 15:44:26 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.359 15:44:26 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:28.359 ************************************ 00:06:28.359 END TEST accel_dif_generate 00:06:28.359 ************************************ 00:06:28.359 15:44:26 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.359 15:44:26 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:28.359 15:44:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.359 15:44:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.359 ************************************ 00:06:28.359 START TEST accel_dif_generate_copy 00:06:28.359 ************************************ 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:28.359 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:28.359 [2024-05-15 15:44:26.730221] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
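
The accel_perf invocation above receives its accel configuration as JSON on an anonymous file descriptor (the -c /dev/fd/62 argument): the build_accel_config steps traced here collect module snippets in the accel_json_cfg array, join them with IFS=',' and normalize the result with jq -r . A minimal sketch of that plumbing, with hypothetical helper names (config_json, run_perf) and an assumed JSON envelope, since the trace never prints the actual document; in this run no hardware module is configured, so the array stays empty:

    # Sketch only: config_json/run_perf are hypothetical names and the JSON
    # envelope is an assumed shape, not copied from accel.sh.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    accel_json_cfg=()    # stays empty here, hence the false [[ -n '' ]] test above
    config_json() {
        local IFS=,      # joins the array entries with commas, as in the trace
        jq -r . <<<"{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    # bash process substitution surfaces inside accel_perf as /dev/fd/NN,
    # which is where the -c /dev/fd/62 seen above comes from
    run_perf() { "$SPDK_DIR/build/examples/accel_perf" -c <(config_json) "$@"; }
    run_perf -t 1 -w dif_generate_copy
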
00:06:28.359 [2024-05-15 15:44:26.730292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582558 ] 00:06:28.359 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.359 [2024-05-15 15:44:26.801454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.359 [2024-05-15 15:44:26.870265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.618 15:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.550 00:06:29.550 real 0m1.366s 00:06:29.550 user 0m1.246s 00:06:29.550 sys 0m0.134s 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.550 15:44:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.550 ************************************ 00:06:29.550 END TEST accel_dif_generate_copy 00:06:29.550 ************************************ 00:06:29.550 15:44:28 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:29.550 15:44:28 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.550 15:44:28 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:29.550 15:44:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.550 15:44:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.809 ************************************ 00:06:29.809 START TEST accel_comp 00:06:29.809 ************************************ 00:06:29.809 15:44:28 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:29.809 15:44:28 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:29.809 [2024-05-15 15:44:28.190653] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:29.809 [2024-05-15 15:44:28.190713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582845 ] 00:06:29.809 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.809 [2024-05-15 15:44:28.260730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.809 [2024-05-15 15:44:28.331126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 
15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.067 15:44:28 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.067 15:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:31.002 15:44:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.002 00:06:31.002 real 0m1.372s 00:06:31.002 user 0m1.248s 00:06:31.002 sys 0m0.140s 00:06:31.002 15:44:29 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.002 15:44:29 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:31.002 ************************************ 00:06:31.002 END TEST accel_comp 00:06:31.002 ************************************ 00:06:31.260 15:44:29 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.260 15:44:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:31.260 15:44:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.260 15:44:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.260 ************************************ 00:06:31.260 START TEST accel_decomp 00:06:31.260 ************************************ 00:06:31.260 15:44:29 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.260 15:44:29 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.261 15:44:29 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:31.261 15:44:29 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:31.261 [2024-05-15 15:44:29.649474] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:31.261 [2024-05-15 15:44:29.649534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583130 ] 00:06:31.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.261 [2024-05-15 15:44:29.718824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.261 [2024-05-15 15:44:29.788527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.528 15:44:29 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.528 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.529 15:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.466 15:44:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.466 00:06:32.467 real 0m1.369s 00:06:32.467 user 0m1.247s 00:06:32.467 sys 0m0.137s 00:06:32.467 15:44:30 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.467 15:44:30 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:32.467 ************************************ 00:06:32.467 END TEST accel_decomp 00:06:32.467 ************************************ 00:06:32.725 
15:44:31 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.725 15:44:31 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:32.725 15:44:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.725 15:44:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.725 ************************************ 00:06:32.725 START TEST accel_decmop_full 00:06:32.725 ************************************ 00:06:32.725 15:44:31 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:32.725 15:44:31 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:32.725 [2024-05-15 15:44:31.108447] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
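
Every START TEST block in this stretch drives the same binary and varies only the workload flags: -t 1 runs for one second, -w selects the opcode, and -l feeds the compress/decompress tests the test/accel/bib input file. A replay sketch of the variants traced in this section, assuming an already-built tree (the flags are copied from the logged invocations; the -c config fd is omitted for brevity):

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    PERF="$SPDK_DIR/build/examples/accel_perf"
    BIB="$SPDK_DIR/test/accel/bib"
    "$PERF" -t 1 -w compress   -l "$BIB"            # accel_comp
    "$PERF" -t 1 -w decompress -l "$BIB" -y         # accel_decomp, -y verifies the result
    "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0    # this block: full 111250-byte buffers instead of 4096
    "$PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf  # accel_decomp_mcore: core mask 0xf

The -m 0xf effect is visible further down: four reactors start on cores 0 through 3, and the one-second run accumulates roughly four seconds of user time (user 0m4.582s against real 0m1.378s).
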
00:06:32.725 [2024-05-15 15:44:31.108509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583411 ] 00:06:32.725 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.725 [2024-05-15 15:44:31.181928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.725 [2024-05-15 15:44:31.252901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.984 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.985 15:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.921 15:44:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.921 00:06:33.921 real 0m1.387s 00:06:33.921 user 0m1.262s 00:06:33.921 sys 0m0.139s 00:06:33.921 15:44:32 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.921 15:44:32 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:33.921 ************************************ 00:06:33.921 END TEST accel_decmop_full 00:06:33.921 ************************************ 00:06:34.180 15:44:32 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.180 15:44:32 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:34.180 15:44:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.180 15:44:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.180 ************************************ 00:06:34.180 START TEST accel_decomp_mcore 00:06:34.180 ************************************ 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:34.180 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:34.180 [2024-05-15 15:44:32.583036] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:34.180 [2024-05-15 15:44:32.583097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583696 ] 00:06:34.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.180 [2024-05-15 15:44:32.651923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.180 [2024-05-15 15:44:32.723744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.180 [2024-05-15 15:44:32.723839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.180 [2024-05-15 15:44:32.723900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.180 [2024-05-15 15:44:32.723906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.438 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.439 15:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.374 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.374 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.374 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.375 00:06:35.375 real 0m1.378s 00:06:35.375 user 0m4.582s 00:06:35.375 sys 0m0.140s 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.375 15:44:33 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:35.375 ************************************ 00:06:35.375 END TEST accel_decomp_mcore 00:06:35.375 ************************************ 00:06:35.634 15:44:33 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.634 15:44:33 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:35.634 15:44:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.634 15:44:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.634 ************************************ 00:06:35.634 START TEST accel_decomp_full_mcore 00:06:35.634 ************************************ 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:35.634 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:35.634 [2024-05-15 15:44:34.050672] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:35.634 [2024-05-15 15:44:34.050730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583986 ] 00:06:35.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.634 [2024-05-15 15:44:34.119764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.634 [2024-05-15 15:44:34.194184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.634 [2024-05-15 15:44:34.194284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.634 [2024-05-15 15:44:34.194306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.634 [2024-05-15 15:44:34.194310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.894 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.895 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.895 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.895 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.895 15:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.270 00:06:37.270 real 0m1.387s 00:06:37.270 user 0m4.597s 00:06:37.270 sys 0m0.149s 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.270 15:44:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:37.270 ************************************ 00:06:37.270 END TEST accel_decomp_full_mcore 00:06:37.270 ************************************ 00:06:37.270 15:44:35 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.270 15:44:35 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:37.270 15:44:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.270 15:44:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.270 ************************************ 00:06:37.270 START TEST accel_decomp_mthread 00:06:37.270 ************************************ 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.270 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
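Every accel_decomp_* case traced above funnels into the same accel_perf example binary; only the command-line flags differ between the mcore, full, and mthread variants. Below is a minimal standalone sketch, assuming a built SPDK tree at the workspace path from this log and combining the flags seen across the variants; the flag readings (-t run time in seconds, -w workload, -l compressed input file, -y verify the output, -o I/O size with 0 meaning the whole file, -m core mask, -T threads per core) are inferred from the traced invocations and worth confirming against accel_perf's help text for your build. The -c /dev/fd/62 JSON accel config that the harness adds is omitted here, which leaves the default module selection.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # workspace path seen in this log
# decompress the test blob on cores 0-3, 2 threads per core, for 1 second, verifying output
"$SPDK_DIR/build/examples/accel_perf" \
    -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" \
    -y -o 0 -m 0xf -T 2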
00:06:37.271 [2024-05-15 15:44:35.532115] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:37.271 [2024-05-15 15:44:35.532197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584268 ] 00:06:37.271 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.271 [2024-05-15 15:44:35.603828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.271 [2024-05-15 15:44:35.674055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.271 15:44:35 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.648 00:06:38.648 real 0m1.376s 00:06:38.648 user 0m1.260s 00:06:38.648 sys 0m0.131s 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.648 15:44:36 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:38.648 ************************************ 00:06:38.648 END TEST accel_decomp_mthread 00:06:38.648 ************************************ 00:06:38.648 15:44:36 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.648 15:44:36 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:38.648 15:44:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.648 15:44:36 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.648 ************************************ 00:06:38.649 START TEST accel_decomp_full_mthread 00:06:38.649 ************************************ 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:38.649 15:44:36 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:38.649 [2024-05-15 15:44:36.999230] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
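Each accel_perf run boots its own DPDK EAL instance with the bracketed parameter list logged at startup (--no-shconf, --huge-unlink, a per-PID --file-prefix, and a fixed --base-virtaddr). The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice means NUMA node 1 has no 2 MB pages reserved; it is typically benign as long as another node has them. A minimal sketch of reserving pages by hand, assuming a Linux host with root access; SPDK's scripts/setup.sh normally handles this, and the sysfs path below is the generic kernel interface, not SPDK-specific:

echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
grep Huge /proc/meminfo  # confirm HugePages_Total and HugePages_Free moved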
00:06:38.649 [2024-05-15 15:44:36.999304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584557 ] 00:06:38.649 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.649 [2024-05-15 15:44:37.069223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.649 [2024-05-15 15:44:37.138543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 15:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.025 00:06:40.025 real 0m1.395s 00:06:40.025 user 0m1.273s 00:06:40.025 sys 0m0.134s 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.025 15:44:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:40.025 ************************************ 00:06:40.025 END TEST accel_decomp_full_mthread 00:06:40.025 
************************************ 00:06:40.025 15:44:38 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:40.025 15:44:38 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.025 15:44:38 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:40.025 15:44:38 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:40.025 15:44:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.025 15:44:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.025 15:44:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.025 15:44:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.025 15:44:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.025 15:44:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.025 15:44:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.025 15:44:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:40.025 15:44:38 accel -- accel/accel.sh@41 -- # jq -r . 00:06:40.025 ************************************ 00:06:40.025 START TEST accel_dif_functional_tests 00:06:40.025 ************************************ 00:06:40.025 15:44:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.025 [2024-05-15 15:44:38.496620] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:40.025 [2024-05-15 15:44:38.496660] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584844 ] 00:06:40.025 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.025 [2024-05-15 15:44:38.562599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.284 [2024-05-15 15:44:38.634443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.284 [2024-05-15 15:44:38.634540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.284 [2024-05-15 15:44:38.634541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.284 00:06:40.284 00:06:40.284 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.284 http://cunit.sourceforge.net/ 00:06:40.284 00:06:40.284 00:06:40.284 Suite: accel_dif 00:06:40.284 Test: verify: DIF generated, GUARD check ...passed 00:06:40.284 Test: verify: DIF generated, APPTAG check ...passed 00:06:40.284 Test: verify: DIF generated, REFTAG check ...passed 00:06:40.284 Test: verify: DIF not generated, GUARD check ...[2024-05-15 15:44:38.703168] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.284 [2024-05-15 15:44:38.703221] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.284 passed 00:06:40.284 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 15:44:38.703262] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.284 [2024-05-15 15:44:38.703280] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.284 passed 00:06:40.284 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 15:44:38.703305] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.284 [2024-05-15 
15:44:38.703324] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.284 passed 00:06:40.284 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:40.284 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 15:44:38.703376] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:40.284 passed 00:06:40.284 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:40.284 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:40.284 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:40.284 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 15:44:38.703499] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:40.284 passed 00:06:40.284 Test: generate copy: DIF generated, GUARD check ...passed 00:06:40.284 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:40.284 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:40.284 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:40.284 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:40.284 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:40.284 Test: generate copy: iovecs-len validate ...[2024-05-15 15:44:38.703680] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:40.284 passed 00:06:40.284 Test: generate copy: buffer alignment validate ...passed 00:06:40.284 00:06:40.284 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.284 suites 1 1 n/a 0 0 00:06:40.284 tests 20 20 20 0 0 00:06:40.284 asserts 204 204 204 0 n/a 00:06:40.284 00:06:40.284 Elapsed time = 0.002 seconds 00:06:40.545 00:06:40.545 real 0m0.439s 00:06:40.545 user 0m0.592s 00:06:40.545 sys 0m0.164s 00:06:40.545 15:44:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.545 15:44:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:40.545 ************************************ 00:06:40.545 END TEST accel_dif_functional_tests 00:06:40.545 ************************************ 00:06:40.545 00:06:40.545 real 0m32.493s 00:06:40.545 user 0m35.284s 00:06:40.545 sys 0m5.226s 00:06:40.545 15:44:38 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.545 15:44:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.545 ************************************ 00:06:40.545 END TEST accel 00:06:40.545 ************************************ 00:06:40.545 15:44:38 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.545 15:44:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:40.545 15:44:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.545 15:44:38 -- common/autotest_common.sh@10 -- # set +x 00:06:40.545 ************************************ 00:06:40.545 START TEST accel_rpc 00:06:40.545 ************************************ 00:06:40.545 15:44:39 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.805 * Looking for test storage... 
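The accel_dif CUnit suite above exercises T10 protection information checks: the Guard field (a checksum of the block data), the Application Tag, and the Reference Tag. The "Failed to compare ..." dif.c errors are the expected negative-path output from the "DIF not generated" cases, not real failures; all 20 tests pass. A minimal sketch of invoking the functional-test binary directly, assuming the same built tree; the harness feeds a JSON accel config on /dev/fd/62, and the empty object used below is a stand-in that may need real module entries depending on the build:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# bind fd 62 to a here-string so the binary can read its config from /dev/fd/62
"$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62 62<<< '{}'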
00:06:40.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.805 15:44:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.805 15:44:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3584913 00:06:40.805 15:44:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.805 15:44:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3584913 00:06:40.805 15:44:39 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3584913 ']' 00:06:40.805 15:44:39 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.805 15:44:39 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.805 15:44:39 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.805 15:44:39 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.805 15:44:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.805 [2024-05-15 15:44:39.158857] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:06:40.805 [2024-05-15 15:44:39.158912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584913 ] 00:06:40.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.805 [2024-05-15 15:44:39.227661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.805 [2024-05-15 15:44:39.302319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.372 15:44:39 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.372 15:44:39 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:41.372 15:44:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:41.372 15:44:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:41.372 15:44:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:41.372 15:44:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:41.372 15:44:39 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:41.372 15:44:39 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.372 15:44:39 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.372 15:44:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.631 ************************************ 00:06:41.631 START TEST accel_assign_opcode 00:06:41.631 ************************************ 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.631 [2024-05-15 15:44:39.976368] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.631 [2024-05-15 15:44:39.984378] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.631 15:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.631 15:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.631 15:44:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:41.631 15:44:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:41.631 15:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.631 15:44:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:41.631 15:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.631 15:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.890 software 00:06:41.890 00:06:41.890 real 0m0.237s 00:06:41.890 user 0m0.043s 00:06:41.890 sys 0m0.013s 00:06:41.890 15:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.890 15:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.890 ************************************ 00:06:41.890 END TEST accel_assign_opcode 00:06:41.890 ************************************ 00:06:41.890 15:44:40 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3584913 00:06:41.890 15:44:40 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3584913 ']' 00:06:41.890 15:44:40 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3584913 00:06:41.890 15:44:40 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:41.890 15:44:40 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.890 15:44:40 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3584913 00:06:41.890 15:44:40 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.890 15:44:40 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.890 15:44:40 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3584913' 00:06:41.890 killing process with pid 3584913 00:06:41.891 15:44:40 accel_rpc -- common/autotest_common.sh@965 -- # kill 3584913 00:06:41.891 15:44:40 accel_rpc -- common/autotest_common.sh@970 -- # wait 3584913 00:06:42.158 00:06:42.158 real 0m1.621s 00:06:42.158 user 0m1.617s 00:06:42.158 sys 0m0.488s 00:06:42.158 15:44:40 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.158 15:44:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.158 ************************************ 00:06:42.158 END TEST accel_rpc 00:06:42.158 ************************************ 00:06:42.158 15:44:40 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.158 15:44:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.158 15:44:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.158 15:44:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.443 ************************************ 00:06:42.443 START TEST app_cmdline 00:06:42.443 ************************************ 00:06:42.443 15:44:40 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.443 * Looking for test storage... 00:06:42.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.443 15:44:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.443 15:44:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3585265 00:06:42.443 15:44:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3585265 00:06:42.443 15:44:40 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3585265 ']' 00:06:42.443 15:44:40 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.443 15:44:40 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.443 15:44:40 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.443 15:44:40 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.443 15:44:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.443 15:44:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.443 [2024-05-15 15:44:40.888533] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
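The accel_rpc test that finished above starts spdk_tgt with --wait-for-rpc, reassigns the copy opcode before framework initialization, and then confirms the assignment. A condensed sketch of that RPC flow, assuming a target already listening on the default /var/tmp/spdk.sock; rpc.py and the three method names are exactly the ones visible in the trace:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
"$RPC" accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
"$RPC" framework_start_init                    # finish startup once opcodes are assigned
"$RPC" accel_get_opc_assignments | jq -r .copy # should print "software"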
00:06:42.443 [2024-05-15 15:44:40.888590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585265 ] 00:06:42.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.443 [2024-05-15 15:44:40.957488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.701 [2024-05-15 15:44:41.032722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.270 15:44:41 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.270 15:44:41 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:43.270 15:44:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:43.270 { 00:06:43.270 "version": "SPDK v24.05-pre git sha1 c3870302f", 00:06:43.270 "fields": { 00:06:43.270 "major": 24, 00:06:43.270 "minor": 5, 00:06:43.270 "patch": 0, 00:06:43.270 "suffix": "-pre", 00:06:43.270 "commit": "c3870302f" 00:06:43.270 } 00:06:43.270 } 00:06:43.270 15:44:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:43.270 15:44:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:43.270 15:44:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:43.270 15:44:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:43.270 15:44:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:43.270 15:44:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:43.270 15:44:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:43.270 15:44:41 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.270 15:44:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.529 15:44:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:43.529 15:44:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:43.529 15:44:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.529 15:44:41 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:06:43.529 15:44:41 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:43.529 request:
00:06:43.529 {
00:06:43.529 "method": "env_dpdk_get_mem_stats",
00:06:43.529 "req_id": 1
00:06:43.529 }
00:06:43.529 Got JSON-RPC error response
00:06:43.529 response:
00:06:43.529 {
00:06:43.529 "code": -32601,
00:06:43.529 "message": "Method not found"
00:06:43.529 }
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:43.529 15:44:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3585265
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3585265 ']'
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3585265
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@951 -- # uname
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:43.529 15:44:42 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3585265
00:06:43.789 15:44:42 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:43.789 15:44:42 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:43.789 15:44:42 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3585265'
00:06:43.789 killing process with pid 3585265
00:06:43.789 15:44:42 app_cmdline -- common/autotest_common.sh@965 -- # kill 3585265
00:06:43.789 15:44:42 app_cmdline -- common/autotest_common.sh@970 -- # wait 3585265
00:06:44.047
00:06:44.047 real 0m1.701s
00:06:44.047 user 0m1.964s
00:06:44.047 sys 0m0.488s
00:06:44.048 15:44:42 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:44.048 15:44:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:44.048 ************************************
00:06:44.048 END TEST app_cmdline
00:06:44.048 ************************************
00:06:44.048 15:44:42 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:06:44.048 15:44:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:44.048 15:44:42 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:44.048 15:44:42 -- common/autotest_common.sh@10 -- # set +x
00:06:44.048 ************************************
00:06:44.048 START TEST version
00:06:44.048 ************************************
00:06:44.048 15:44:42 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:06:44.306 * Looking for test storage...
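The NOT wrapper traced above (from autotest_common.sh) runs a command that is expected to fail and inverts its exit status: the test passes precisely because env_dpdk_get_mem_stats comes back with -32601. A rough stand-in for the helper, assuming bash; the real one also validates the executable via type -t/-P and treats exit codes above 128 (crashes) as genuine failures rather than expected ones:

  # Succeed only when the wrapped command fails.
  NOT() { if "$@"; then return 1; else return 0; fi; }
  NOT ./scripts/rpc.py env_dpdk_get_mem_stats && echo 'disallowed RPC rejected, as expected'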
00:06:44.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.306 15:44:42 version -- app/version.sh@17 -- # get_header_version major 00:06:44.306 15:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.306 15:44:42 version -- app/version.sh@14 -- # cut -f2 00:06:44.306 15:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.306 15:44:42 version -- app/version.sh@17 -- # major=24 00:06:44.306 15:44:42 version -- app/version.sh@18 -- # get_header_version minor 00:06:44.306 15:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.306 15:44:42 version -- app/version.sh@14 -- # cut -f2 00:06:44.306 15:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.306 15:44:42 version -- app/version.sh@18 -- # minor=5 00:06:44.306 15:44:42 version -- app/version.sh@19 -- # get_header_version patch 00:06:44.306 15:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.306 15:44:42 version -- app/version.sh@14 -- # cut -f2 00:06:44.306 15:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.306 15:44:42 version -- app/version.sh@19 -- # patch=0 00:06:44.306 15:44:42 version -- app/version.sh@20 -- # get_header_version suffix 00:06:44.306 15:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.306 15:44:42 version -- app/version.sh@14 -- # cut -f2 00:06:44.306 15:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.306 15:44:42 version -- app/version.sh@20 -- # suffix=-pre 00:06:44.306 15:44:42 version -- app/version.sh@22 -- # version=24.5 00:06:44.306 15:44:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.306 15:44:42 version -- app/version.sh@28 -- # version=24.5rc0 00:06:44.306 15:44:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:44.306 15:44:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.306 15:44:42 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:44.306 15:44:42 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:44.306 00:06:44.306 real 0m0.190s 00:06:44.306 user 0m0.091s 00:06:44.306 sys 0m0.147s 00:06:44.306 15:44:42 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.306 15:44:42 version -- common/autotest_common.sh@10 -- # set +x 00:06:44.306 ************************************ 00:06:44.306 END TEST version 00:06:44.306 ************************************ 00:06:44.306 15:44:42 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:44.306 15:44:42 -- spdk/autotest.sh@194 -- # uname -s 00:06:44.306 15:44:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:44.306 15:44:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:44.306 15:44:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:44.306 15:44:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
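get_header_version in version.sh scrapes each field straight out of include/spdk/version.h with the grep/cut/tr pipeline traced above; for example, for the major number:

  # '#define SPDK_VERSION_MAJOR<tab>24' -> 24 (cut splits on its default tab delimiter;
  # tr strips the quotes that wrap string-valued fields such as the suffix).
  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'

Repeating this for MINOR, PATCH and SUFFIX yields 24, 5, 0 and -pre; the test folds these into 24.5rc0 (a -pre suffix maps to rc0) and checks that python3 -c 'import spdk; print(spdk.__version__)' agrees.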
00:06:44.306 15:44:42 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:44.306 15:44:42 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:44.306 15:44:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.306 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.306 15:44:42 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:44.306 15:44:42 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:44.306 15:44:42 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:44.306 15:44:42 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:44.306 15:44:42 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:44.306 15:44:42 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:44.306 15:44:42 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.306 15:44:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:44.306 15:44:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.306 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.306 ************************************ 00:06:44.306 START TEST nvmf_tcp 00:06:44.306 ************************************ 00:06:44.306 15:44:42 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.564 * Looking for test storage... 00:06:44.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.564 15:44:42 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.564 15:44:42 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.564 15:44:42 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.564 15:44:42 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.565 15:44:42 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.565 15:44:42 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.565 15:44:42 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.565 15:44:42 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:44.565 15:44:42 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:44.565 15:44:42 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:44.565 15:44:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:44.565 15:44:42 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.565 15:44:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:44.565 15:44:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.565 
15:44:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.565 ************************************ 00:06:44.565 START TEST nvmf_example 00:06:44.565 ************************************ 00:06:44.565 15:44:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.565 * Looking for test storage... 00:06:44.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.565 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.565 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.824 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:44.825 15:44:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:51.413 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:51.414 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:51.414 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:51.414 Found net devices under 
0000:af:00.0: cvl_0_0 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:51.414 Found net devices under 0000:af:00.1: cvl_0_1 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:51.414 15:44:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:51.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:51.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms
00:06:51.673
00:06:51.673 --- 10.0.0.2 ping statistics ---
00:06:51.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:51.673 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:51.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:51.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms
00:06:51.673
00:06:51.673 --- 10.0.0.1 ping statistics ---
00:06:51.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:51.673 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3589164
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3589164
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3589164 ']'
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:51.673 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
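nvmf_tcp_init above wires the two E810 ports into a loopback topology: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so the test traffic really crosses the link. Condensed from the commands traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                     # reachability, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP and NVMF_EXAMPLE get the namespace wrapper prepended above.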
00:06:51.674 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.674 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.674 15:44:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:51.674 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.610 15:44:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:52.610 15:44:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:06:52.610 EAL: No free 2048 kB hugepages reported on node 1
00:07:04.817 Initializing NVMe Controllers
00:07:04.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:04.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:04.817 Initialization complete. Launching workers.
00:07:04.817 ========================================================
00:07:04.817 Latency(us)
00:07:04.817 Device Information : IOPS MiB/s Average min max
00:07:04.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14406.30 56.27 4442.29 680.18 15450.28
00:07:04.817 ========================================================
00:07:04.817 Total : 14406.30 56.27 4442.29 680.18 15450.28
00:07:04.817
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:04.817 rmmod nvme_tcp
00:07:04.817 rmmod nvme_fabrics
00:07:04.817 rmmod nvme_keyring
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3589164 ']'
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3589164
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3589164 ']'
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3589164
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3589164
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']'
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3589164'
00:07:04.817 killing process with pid 3589164
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3589164
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3589164
00:07:04.817 nvmf threads initialize successfully
00:07:04.817 bdev subsystem init successfully
00:07:04.817 created a nvmf target service
00:07:04.817 create targets's poll groups done
00:07:04.817 all subsystems of target started
00:07:04.817 nvmf target is running
00:07:04.817 all subsystems of target stopped
00:07:04.817 destroy targets's poll groups done
00:07:04.817 destroyed the nvmf target service
00:07:04.817 bdev subsystem finish successfully
00:07:04.817 nvmf threads destroy successfully
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:04.817 15:45:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:05.393 15:45:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:05.393 15:45:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:07:05.393 15:45:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:05.393 15:45:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:05.394
00:07:05.394 real 0m20.712s
00:07:05.394 user 0m45.412s
00:07:05.394 sys 0m7.428s
00:07:05.394 15:45:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:05.394 15:45:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:05.394 ************************************
00:07:05.394 END TEST nvmf_example
00:07:05.394 ************************************
00:07:05.394 15:45:03 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:07:05.394 15:45:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:07:05.394 15:45:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:05.394 15:45:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:05.394 ************************************
00:07:05.394 START TEST nvmf_filesystem
00:07:05.394 ************************************
00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:07:05.394 * Looking for test storage...
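The nvmf_example body just completed boils down to five RPCs against the example target plus one initiator-side perf run; condensed from the rpc_cmd traces above, with rpc.py standing in for the wrapper:

  # Target side: TCP transport (8 KiB IO units), one 64 MiB malloc bdev with
  # 512-byte blocks, one subsystem, one listener on 10.0.0.2:4420.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512              # -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: queue depth 64, 4 KiB I/O, 30% reads (-M 30), 10 seconds.
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The run above sustained 14406 IOPS (56.27 MiB/s) at 4.44 ms average latency across the namespace pair.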
00:07:05.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:05.394 15:45:03 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:05.394 15:45:03 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.394 
15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:05.394 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:05.394 #define SPDK_CONFIG_H 00:07:05.394 #define SPDK_CONFIG_APPS 1 00:07:05.394 #define SPDK_CONFIG_ARCH native 00:07:05.394 #undef SPDK_CONFIG_ASAN 00:07:05.394 #undef SPDK_CONFIG_AVAHI 00:07:05.394 #undef SPDK_CONFIG_CET 00:07:05.394 #define SPDK_CONFIG_COVERAGE 1 00:07:05.394 #define SPDK_CONFIG_CROSS_PREFIX 00:07:05.394 #undef SPDK_CONFIG_CRYPTO 00:07:05.394 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:05.394 #undef SPDK_CONFIG_CUSTOMOCF 00:07:05.394 #undef SPDK_CONFIG_DAOS 00:07:05.394 #define SPDK_CONFIG_DAOS_DIR 00:07:05.394 #define SPDK_CONFIG_DEBUG 1 00:07:05.394 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:05.394 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:05.394 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:05.394 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:05.394 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:05.394 #undef SPDK_CONFIG_DPDK_UADK 00:07:05.394 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:05.394 #define SPDK_CONFIG_EXAMPLES 1 00:07:05.394 #undef SPDK_CONFIG_FC 00:07:05.394 #define SPDK_CONFIG_FC_PATH 00:07:05.394 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:05.394 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:05.394 #undef SPDK_CONFIG_FUSE 00:07:05.394 #undef SPDK_CONFIG_FUZZER 00:07:05.394 #define SPDK_CONFIG_FUZZER_LIB 00:07:05.394 #undef SPDK_CONFIG_GOLANG 00:07:05.394 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:05.394 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:05.394 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:05.394 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:05.394 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:05.394 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:05.394 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:05.394 #define SPDK_CONFIG_IDXD 1 00:07:05.394 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:05.394 #undef SPDK_CONFIG_IPSEC_MB 00:07:05.394 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:05.394 #define SPDK_CONFIG_ISAL 1 00:07:05.394 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:05.394 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:05.395 #define SPDK_CONFIG_LIBDIR 00:07:05.395 #undef SPDK_CONFIG_LTO 00:07:05.395 #define SPDK_CONFIG_MAX_LCORES 00:07:05.395 #define SPDK_CONFIG_NVME_CUSE 1 00:07:05.395 #undef SPDK_CONFIG_OCF 00:07:05.395 #define SPDK_CONFIG_OCF_PATH 00:07:05.395 #define SPDK_CONFIG_OPENSSL_PATH 00:07:05.395 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:05.395 #define SPDK_CONFIG_PGO_DIR 00:07:05.395 #undef 
SPDK_CONFIG_PGO_USE 00:07:05.395 #define SPDK_CONFIG_PREFIX /usr/local 00:07:05.395 #undef SPDK_CONFIG_RAID5F 00:07:05.395 #undef SPDK_CONFIG_RBD 00:07:05.395 #define SPDK_CONFIG_RDMA 1 00:07:05.395 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:05.395 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:05.395 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:05.395 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:05.395 #define SPDK_CONFIG_SHARED 1 00:07:05.395 #undef SPDK_CONFIG_SMA 00:07:05.395 #define SPDK_CONFIG_TESTS 1 00:07:05.395 #undef SPDK_CONFIG_TSAN 00:07:05.395 #define SPDK_CONFIG_UBLK 1 00:07:05.395 #define SPDK_CONFIG_UBSAN 1 00:07:05.395 #undef SPDK_CONFIG_UNIT_TESTS 00:07:05.395 #undef SPDK_CONFIG_URING 00:07:05.395 #define SPDK_CONFIG_URING_PATH 00:07:05.395 #undef SPDK_CONFIG_URING_ZNS 00:07:05.395 #undef SPDK_CONFIG_USDT 00:07:05.395 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:05.395 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:05.395 #define SPDK_CONFIG_VFIO_USER 1 00:07:05.395 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:05.395 #define SPDK_CONFIG_VHOST 1 00:07:05.395 #define SPDK_CONFIG_VIRTIO 1 00:07:05.395 #undef SPDK_CONFIG_VTUNE 00:07:05.395 #define SPDK_CONFIG_VTUNE_DIR 00:07:05.395 #define SPDK_CONFIG_WERROR 1 00:07:05.395 #define SPDK_CONFIG_WPDK_DIR 00:07:05.395 #undef SPDK_CONFIG_XNVME 00:07:05.395 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:05.395 15:45:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:05.395 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:05.656 15:45:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:05.656 15:45:04 
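[Editor's note] The alternating ": 0" / "export SPDK_TEST_..." pairs traced through this stretch are bash's default-then-export idiom. A short sketch of why the trace looks that way (the flag name below is made up):

    # ${VAR:=0} assigns 0 only when VAR is unset or empty; the no-op ':'
    # builtin discards the expansion, which xtrace records as ': 0'.
    : "${SPDK_TEST_EXAMPLE:=0}"   # SPDK_TEST_EXAMPLE is a hypothetical flag
    export SPDK_TEST_EXAMPLE
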
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.656 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
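[Editor's note] The suppression-file steps above (remove the old file, write leak:libfuse3.so into it, point LSAN_OPTIONS at it) whitelist a known leak in an external library so LeakSanitizer does not fail the run. Condensed sketch of the same steps:

    # Sketch of the LeakSanitizer suppression traced above: known leaks in
    # external libraries (here libfuse3) are listed in a file LSAN reads.
    rm -rf /var/tmp/asan_suppression_file
    echo "leak:libfuse3.so" > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
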
00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3591738 ]] 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3591738 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.B72RP3 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.B72RP3/tests/target /tmp/spdk.B72RP3 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- 
# avails["$mount"]=67108864 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=972304384 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4312125440 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52304707584 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742292992 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9437585408 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30867771392 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871146496 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12339077120 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9383936 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30869704704 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871146496 00:07:05.657 15:45:04 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1441792 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:05.657 * Looking for test storage... 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52304707584 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11652177920 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:05.657 15:45:04 
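[Editor's note] The df(1) loop and the storage-selection arithmetic traced above pick a filesystem with enough headroom for the test. The logged numbers pin the formula down: requested_size = 2 GiB + 64 MiB = 2214592512, and for the chosen root overlay new_size = sizes[/] - target_space + requested_size = 61742292992 - 52304707584 + 2214592512 = 11652177920, roughly 19% of the filesystem and comfortably under the 95% cutoff. A sketch of both halves, assuming GNU df (-B1 is assumed here to get the byte counts seen in the arrays):

    # Parse df -T rows into per-mount arrays (column order as traced above).
    declare -A mounts fss sizes avails uses
    while read -r source fs size used avail _ mount; do
        mounts[$mount]=$source
        fss[$mount]=$fs
        sizes[$mount]=$size
        avails[$mount]=$avail
        uses[$mount]=$used
    done < <(df -T -B1 | grep -v Filesystem)

    # Accept a candidate only if it has the space and the projected usage
    # (current usage plus the request) stays at or below 95%.
    target_space=${avails[$mount]}
    if (( target_space >= requested_size )); then
        new_size=$((sizes[$mount] - target_space + requested_size))
        (( new_size * 100 / sizes[$mount] > 95 )) || echo "using $mount"
    fi
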
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:05.657 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.658 
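[Editor's note] The PS4 assignment traced just above is what produces this log's line format: PS4 is expanded like PS1, so \t becomes the wall-clock time, $test_domain is the current test name (nvmf_tcp.nvmf_filesystem here), and the BASH_SOURCE substring keeps only the last two path components before @LINENO. Reproduced for reference:

    # The xtrace prompt behind every '-- file@line -- command' entry in this
    # log; 'set -x' turns tracing on, extdebug + the ERR trap add backtraces.
    set -o errtrace
    shopt -s extdebug
    trap 'trap - ERR; print_backtrace >&2' ERR
    PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x
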
15:45:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.658 15:45:04 
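[Editor's note] The long PATH values above, with the same /opt/golangci, /opt/protoc and /opt/go segments repeated several times, appear to come from paths/export.sh prepending its directories unconditionally each time it is sourced. Not from the log, a hedged sketch of an idempotent prepend that would keep PATH from growing:

    # Hypothetical alternative to the unconditional prepend in export.sh:
    # only add a directory when it is not already on PATH.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH
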
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.658 15:45:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:12.279 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.279 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:12.279 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.280 15:45:10 
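[Editor's note] The device scan traced above matches PCI vendor:device IDs (0x8086 - 0x159b for the two E810 ports found in this run) and then resolves each address to its kernel interface through sysfs, which is where the "Found net devices under ..." lines come from. A sketch of that resolution:

    # Map a PCI address to its net device name via sysfs, as traced above;
    # the two addresses are the E810 ports discovered in this run.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${path##*/}"
        done
    done
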
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:12.280 Found net devices under 0000:af:00.0: cvl_0_0 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:12.280 Found net devices under 0000:af:00.1: cvl_0_1 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.280 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:12.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:12.539 00:07:12.539 --- 10.0.0.2 ping statistics --- 00:07:12.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.539 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:07:12.539 00:07:12.539 --- 10.0.0.1 ping statistics --- 00:07:12.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.539 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.539 ************************************ 00:07:12.539 START TEST nvmf_filesystem_no_in_capsule 00:07:12.539 ************************************ 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:12.539 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3595502 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3595502 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3595502 ']' 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.540 15:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.540 [2024-05-15 15:45:11.030060] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:07:12.540 [2024-05-15 15:45:11.030105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.540 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.540 [2024-05-15 15:45:11.103179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.799 [2024-05-15 15:45:11.176148] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.799 [2024-05-15 15:45:11.176197] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.799 [2024-05-15 15:45:11.176210] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.799 [2024-05-15 15:45:11.176221] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.799 [2024-05-15 15:45:11.176231] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
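The nvmf_tcp_init trace above (nvmf/common.sh@229-268) is what makes a single-host NVMe/TCP run possible on physical NICs: one port of the E810 pair is moved into a private network namespace so the target (cvl_0_0, 10.0.0.2) and the initiator (cvl_0_1, 10.0.0.1) exchange traffic over the wire rather than the loopback. A minimal sketch of that wiring, using the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port out of the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP (port 4420)
  ping -c 1 10.0.0.2                            # prove the path before starting nvmf_tgt

Every later target-side command in the log is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array), which is why nvmf_tgt listens on 10.0.0.2 while nvme connect runs from the root namespace.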
00:07:12.799 [2024-05-15 15:45:11.176328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.799 [2024-05-15 15:45:11.176425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.799 [2024-05-15 15:45:11.176509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.799 [2024-05-15 15:45:11.176512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.368 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.369 [2024-05-15 15:45:11.884073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.369 15:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.629 Malloc1 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.629 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.629 [2024-05-15 15:45:12.031142] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:13.629 [2024-05-15 15:45:12.031472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:13.630 { 00:07:13.630 "name": "Malloc1", 00:07:13.630 "aliases": [ 00:07:13.630 "d8bab9c6-1962-4341-aa34-109a83f80326" 00:07:13.630 ], 00:07:13.630 "product_name": "Malloc disk", 00:07:13.630 "block_size": 512, 00:07:13.630 "num_blocks": 1048576, 00:07:13.630 "uuid": "d8bab9c6-1962-4341-aa34-109a83f80326", 00:07:13.630 "assigned_rate_limits": { 00:07:13.630 "rw_ios_per_sec": 0, 00:07:13.630 "rw_mbytes_per_sec": 0, 00:07:13.630 "r_mbytes_per_sec": 0, 00:07:13.630 "w_mbytes_per_sec": 0 00:07:13.630 }, 00:07:13.630 "claimed": true, 00:07:13.630 "claim_type": "exclusive_write", 00:07:13.630 "zoned": false, 00:07:13.630 "supported_io_types": { 00:07:13.630 "read": true, 00:07:13.630 "write": true, 00:07:13.630 "unmap": true, 00:07:13.630 "write_zeroes": true, 00:07:13.630 "flush": true, 00:07:13.630 "reset": true, 00:07:13.630 "compare": false, 00:07:13.630 "compare_and_write": false, 00:07:13.630 "abort": true, 00:07:13.630 "nvme_admin": false, 00:07:13.630 "nvme_io": false 00:07:13.630 }, 00:07:13.630 "memory_domains": [ 00:07:13.630 { 00:07:13.630 "dma_device_id": "system", 00:07:13.630 "dma_device_type": 1 
00:07:13.630 }, 00:07:13.630 { 00:07:13.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.630 "dma_device_type": 2 00:07:13.630 } 00:07:13.630 ], 00:07:13.630 "driver_specific": {} 00:07:13.630 } 00:07:13.630 ]' 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:13.630 15:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.008 15:45:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.008 15:45:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:15.008 15:45:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.008 15:45:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:15.008 15:45:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:17.542 15:45:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:17.542 15:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:18.109 15:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.046 ************************************ 00:07:19.046 START TEST filesystem_ext4 00:07:19.046 ************************************ 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:19.046 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:19.046 15:45:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:19.046 mke2fs 1.46.5 (30-Dec-2021) 00:07:19.306 Discarding device blocks: 0/522240 done 00:07:19.306 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:19.306 Filesystem UUID: 5598eb59-5402-40aa-bd39-503b0a22ebab 00:07:19.306 Superblock backups stored on blocks: 00:07:19.306 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:19.306 00:07:19.306 Allocating group tables: 0/64 done 00:07:19.306 Writing inode tables: 0/64 done 00:07:19.306 Creating journal (8192 blocks): done 00:07:19.306 Writing superblocks and filesystem accounting information: 0/64 done 00:07:19.306 00:07:19.306 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:19.306 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.565 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.565 15:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:19.565 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.565 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:19.565 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:19.565 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3595502 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.566 00:07:19.566 real 0m0.519s 00:07:19.566 user 0m0.032s 00:07:19.566 sys 0m0.073s 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:19.566 ************************************ 00:07:19.566 END TEST filesystem_ext4 00:07:19.566 ************************************ 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:19.566 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.566 15:45:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.826 ************************************ 00:07:19.826 START TEST filesystem_btrfs 00:07:19.826 ************************************ 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:19.826 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:20.084 btrfs-progs v6.6.2 00:07:20.084 See https://btrfs.readthedocs.io for more information. 00:07:20.084 00:07:20.084 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:20.084 NOTE: several default settings have changed in version 5.15, please make sure 00:07:20.084 this does not affect your deployments: 00:07:20.084 - DUP for metadata (-m dup) 00:07:20.084 - enabled no-holes (-O no-holes) 00:07:20.084 - enabled free-space-tree (-R free-space-tree) 00:07:20.084 00:07:20.084 Label: (null) 00:07:20.084 UUID: 6e75d998-d81b-409c-9f96-720880e7a1c3 00:07:20.084 Node size: 16384 00:07:20.084 Sector size: 4096 00:07:20.084 Filesystem size: 510.00MiB 00:07:20.084 Block group profiles: 00:07:20.084 Data: single 8.00MiB 00:07:20.084 Metadata: DUP 32.00MiB 00:07:20.084 System: DUP 8.00MiB 00:07:20.085 SSD detected: yes 00:07:20.085 Zoned device: no 00:07:20.085 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:20.085 Runtime features: free-space-tree 00:07:20.085 Checksum: crc32c 00:07:20.085 Number of devices: 1 00:07:20.085 Devices: 00:07:20.085 ID SIZE PATH 00:07:20.085 1 510.00MiB /dev/nvme0n1p1 00:07:20.085 00:07:20.085 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:20.085 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.343 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3595502 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.344 00:07:20.344 real 0m0.681s 00:07:20.344 user 0m0.039s 00:07:20.344 sys 0m0.131s 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:20.344 ************************************ 00:07:20.344 END TEST filesystem_btrfs 00:07:20.344 ************************************ 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:20.344 15:45:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.344 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.602 ************************************ 00:07:20.602 START TEST filesystem_xfs 00:07:20.602 ************************************ 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:20.603 15:45:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:20.603 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:20.603 = sectsz=512 attr=2, projid32bit=1 00:07:20.603 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:20.603 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:20.603 data = bsize=4096 blocks=130560, imaxpct=25 00:07:20.603 = sunit=0 swidth=0 blks 00:07:20.603 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:20.603 log =internal log bsize=4096 blocks=16384, version=2 00:07:20.603 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:20.603 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:21.172 Discarding blocks...Done. 
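Each filesystem_* subtest drives the same smoke cycle from target/filesystem.sh over the freshly created partition; only the mkfs step differs (make_filesystem forces ext4 with -F, btrfs and xfs with -f). A condensed sketch of the cycle, assuming the SPDK_TEST partition created by parted earlier:

  mkfs.xfs -f /dev/nvme0n1p1        # or mkfs.ext4 -F / mkfs.btrfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa             # one write through the mounted filesystem
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                # pass only if the target survived the I/O

The kill -0 3595502 probes in the log are exactly this liveness check against the nvmf_tgt pid.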
00:07:21.172 15:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:21.172 15:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3595502 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:24.460 00:07:24.460 real 0m3.439s 00:07:24.460 user 0m0.031s 00:07:24.460 sys 0m0.083s 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:24.460 ************************************ 00:07:24.460 END TEST filesystem_xfs 00:07:24.460 ************************************ 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:24.460 
15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3595502 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3595502 ']' 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3595502 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3595502 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3595502' 00:07:24.460 killing process with pid 3595502 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3595502 00:07:24.460 [2024-05-15 15:45:22.668867] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:24.460 15:45:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3595502 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:24.719 00:07:24.719 real 0m12.057s 00:07:24.719 user 0m46.865s 00:07:24.719 sys 0m1.846s 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.719 ************************************ 00:07:24.719 END TEST nvmf_filesystem_no_in_capsule 00:07:24.719 ************************************ 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.719 ************************************ 00:07:24.719 START TEST nvmf_filesystem_in_capsule 00:07:24.719 ************************************ 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3597848 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3597848 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3597848 ']' 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.719 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.719 [2024-05-15 15:45:23.156581] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:07:24.719 [2024-05-15 15:45:23.156623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.719 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.719 [2024-05-15 15:45:23.228667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.979 [2024-05-15 15:45:23.305876] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.979 [2024-05-15 15:45:23.305914] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
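The in_capsule variant differs from the run above only in the transport's in-capsule data size: nvmf_create_transport is called with -c 4096 instead of -c 0, so host writes of up to 4 KiB travel inside the command capsule instead of being pulled in a separate data transfer. Sketched as the underlying scripts/rpc.py calls (in SPDK's test helpers, rpc_cmd forwards to rpc.py), the setup sequence this trace executes next is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4 KiB of in-capsule data
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ramdisk, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420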
00:07:24.979 [2024-05-15 15:45:23.305928] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.979 [2024-05-15 15:45:23.305939] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.979 [2024-05-15 15:45:23.305949] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.979 [2024-05-15 15:45:23.306004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.979 [2024-05-15 15:45:23.306022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.979 [2024-05-15 15:45:23.306110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.979 [2024-05-15 15:45:23.306114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.589 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:25.589 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:25.589 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.589 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.589 15:45:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.589 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.589 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:25.589 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:25.589 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.589 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.589 [2024-05-15 15:45:24.025065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.589 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.589 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:25.590 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.590 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.590 Malloc1 00:07:25.590 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.590 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:25.590 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.590 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 15:45:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 [2024-05-15 15:45:24.171352] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:25.849 [2024-05-15 15:45:24.171677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:25.849 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:25.850 { 00:07:25.850 "name": "Malloc1", 00:07:25.850 "aliases": [ 00:07:25.850 "14e79c18-4cab-470d-bcf6-3a603278a2da" 00:07:25.850 ], 00:07:25.850 "product_name": "Malloc disk", 00:07:25.850 "block_size": 512, 00:07:25.850 "num_blocks": 1048576, 00:07:25.850 "uuid": "14e79c18-4cab-470d-bcf6-3a603278a2da", 00:07:25.850 "assigned_rate_limits": { 00:07:25.850 "rw_ios_per_sec": 0, 00:07:25.850 "rw_mbytes_per_sec": 0, 00:07:25.850 "r_mbytes_per_sec": 0, 00:07:25.850 "w_mbytes_per_sec": 0 00:07:25.850 }, 00:07:25.850 "claimed": true, 00:07:25.850 "claim_type": "exclusive_write", 00:07:25.850 "zoned": false, 00:07:25.850 "supported_io_types": { 00:07:25.850 "read": true, 00:07:25.850 "write": true, 00:07:25.850 "unmap": true, 00:07:25.850 "write_zeroes": true, 00:07:25.850 "flush": true, 00:07:25.850 "reset": true, 
00:07:25.850 "compare": false, 00:07:25.850 "compare_and_write": false, 00:07:25.850 "abort": true, 00:07:25.850 "nvme_admin": false, 00:07:25.850 "nvme_io": false 00:07:25.850 }, 00:07:25.850 "memory_domains": [ 00:07:25.850 { 00:07:25.850 "dma_device_id": "system", 00:07:25.850 "dma_device_type": 1 00:07:25.850 }, 00:07:25.850 { 00:07:25.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.850 "dma_device_type": 2 00:07:25.850 } 00:07:25.850 ], 00:07:25.850 "driver_specific": {} 00:07:25.850 } 00:07:25.850 ]' 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:25.850 15:45:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:27.227 15:45:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:27.227 15:45:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:27.227 15:45:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:27.227 15:45:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:27.227 15:45:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:29.166 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:29.424 15:45:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:29.682 15:45:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.618 ************************************ 00:07:30.618 START TEST filesystem_in_capsule_ext4 00:07:30.618 ************************************ 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:30.618 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:30.877 mke2fs 1.46.5 (30-Dec-2021) 00:07:30.877 Discarding device blocks: 0/522240 done 00:07:30.877 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:30.877 Filesystem UUID: 1af84580-163f-4f6e-9cd7-e83249eb3a2f 00:07:30.877 Superblock backups stored on blocks: 00:07:30.877 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:30.877 00:07:30.877 Allocating group tables: 0/64 done 00:07:30.877 Writing inode tables: 0/64 done 00:07:30.877 Creating journal (8192 blocks): done 00:07:30.877 Writing superblocks and filesystem accounting information: 0/64 done 00:07:30.877 00:07:30.877 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:30.877 15:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3597848 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.814 00:07:31.814 real 0m1.193s 00:07:31.814 user 0m0.023s 00:07:31.814 sys 0m0.080s 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.814 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:31.814 ************************************ 00:07:31.814 END TEST filesystem_in_capsule_ext4 00:07:31.814 ************************************ 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.073 ************************************ 00:07:32.073 START TEST filesystem_in_capsule_btrfs 00:07:32.073 ************************************ 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:32.073 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:32.331 btrfs-progs v6.6.2 00:07:32.331 See https://btrfs.readthedocs.io for more information. 00:07:32.331 00:07:32.331 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:32.331 NOTE: several default settings have changed in version 5.15, please make sure 00:07:32.331 this does not affect your deployments: 00:07:32.331 - DUP for metadata (-m dup) 00:07:32.331 - enabled no-holes (-O no-holes) 00:07:32.331 - enabled free-space-tree (-R free-space-tree) 00:07:32.331 00:07:32.331 Label: (null) 00:07:32.331 UUID: 1d2ea1f7-e84e-4834-a970-e2252c8fb0f8 00:07:32.331 Node size: 16384 00:07:32.331 Sector size: 4096 00:07:32.331 Filesystem size: 510.00MiB 00:07:32.331 Block group profiles: 00:07:32.331 Data: single 8.00MiB 00:07:32.331 Metadata: DUP 32.00MiB 00:07:32.331 System: DUP 8.00MiB 00:07:32.332 SSD detected: yes 00:07:32.332 Zoned device: no 00:07:32.332 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:32.332 Runtime features: free-space-tree 00:07:32.332 Checksum: crc32c 00:07:32.332 Number of devices: 1 00:07:32.332 Devices: 00:07:32.332 ID SIZE PATH 00:07:32.332 1 510.00MiB /dev/nvme0n1p1 00:07:32.332 00:07:32.332 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:32.332 15:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3597848 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.707 00:07:33.707 real 0m1.470s 00:07:33.707 user 0m0.031s 00:07:33.707 sys 0m0.141s 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.707 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:33.708 ************************************ 00:07:33.708 END TEST filesystem_in_capsule_btrfs 00:07:33.708 ************************************ 00:07:33.708 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:33.708 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:33.708 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.708 15:45:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.708 ************************************ 00:07:33.708 START TEST filesystem_in_capsule_xfs 00:07:33.708 ************************************ 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:33.708 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:33.708 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:33.708 = sectsz=512 attr=2, projid32bit=1 00:07:33.708 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:33.708 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:33.708 data = bsize=4096 blocks=130560, imaxpct=25 00:07:33.708 = sunit=0 swidth=0 blks 00:07:33.708 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:33.708 log =internal log bsize=4096 blocks=16384, version=2 00:07:33.708 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:33.708 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:34.643 Discarding blocks...Done. 
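
The per-filesystem pass traced above for ext4 and btrfs, and below for xfs, condenses to a minimal bash sketch; the device, mountpoint, and force flags mirror the trace, while the helper name and error handling are illustrative rather than the suite's actual code:

verify_filesystem() {
    local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device
    local force=-f                      # btrfs and xfs take -f to force creation
    [ "$fstype" = ext4 ] && force=-F    # ext4 spells the same flag -F
    "mkfs.$fstype" "$force" "$dev" || return 1
    mount "$dev" "$mnt"                 # mount the freshly made filesystem
    touch "$mnt/aaa" && sync            # write one file and flush it to the target
    rm "$mnt/aaa" && sync               # remove it and flush again
    umount "$mnt"                       # a clean unmount means the I/O path held
}

Each pass then re-checks with lsblk -l -o NAME that nvme0n1 and nvme0n1p1 are still visible, matching the grep -q -w checks in the trace.
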
00:07:34.643 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:34.643 15:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3597848 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.176 00:07:37.176 real 0m3.503s 00:07:37.176 user 0m0.027s 00:07:37.176 sys 0m0.087s 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:37.176 ************************************ 00:07:37.176 END TEST filesystem_in_capsule_xfs 00:07:37.176 ************************************ 00:07:37.176 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:37.434 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:37.435 15:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:37.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.694 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.695 15:45:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3597848 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3597848 ']' 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3597848 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3597848 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3597848' 00:07:37.695 killing process with pid 3597848 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3597848 00:07:37.695 [2024-05-15 15:45:36.118285] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:37.695 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3597848 00:07:37.954 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:37.954 00:07:37.954 real 0m13.366s 00:07:37.954 user 0m52.105s 00:07:37.954 sys 0m1.865s 00:07:37.954 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.954 15:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.954 ************************************ 00:07:37.954 END TEST nvmf_filesystem_in_capsule 00:07:37.954 ************************************ 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:38.213 rmmod nvme_tcp 00:07:38.213 rmmod nvme_fabrics 00:07:38.213 rmmod nvme_keyring 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:38.213 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:38.214 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:38.214 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:38.214 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:38.214 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:38.214 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:38.214 15:45:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.214 15:45:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.214 15:45:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.126 15:45:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.126 00:07:40.126 real 0m34.842s 00:07:40.126 user 1m41.070s 00:07:40.126 sys 0m9.082s 00:07:40.126 15:45:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.126 15:45:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.126 ************************************ 00:07:40.126 END TEST nvmf_filesystem 00:07:40.126 ************************************ 00:07:40.385 15:45:38 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:40.385 15:45:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:40.385 15:45:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.385 15:45:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.385 ************************************ 00:07:40.385 START TEST nvmf_target_discovery 00:07:40.385 ************************************ 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:40.385 * Looking for test storage... 
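
The discovery flow this test exercises reduces to one nvme-cli call against the target listener; a minimal sketch, with the address, port, and host identity taken from the trace and the record-count check illustrative:

nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
    --hostid=006f0d1b-21c0-e711-906e-00163566263e \
  | grep -c 'subnqn:'   # 6 records below: the discovery subsystem, cnode1-4, and one 4430 referral
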
00:07:40.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.385 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.386 15:45:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.951 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.951 15:45:45 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:46.952 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:46.952 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:46.952 Found net devices under 0000:af:00.0: cvl_0_0 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:46.952 Found net devices under 0000:af:00.1: cvl_0_1 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:07:46.952 00:07:46.952 --- 10.0.0.2 ping statistics --- 00:07:46.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.952 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:07:46.952 00:07:46.952 --- 10.0.0.1 ping statistics --- 00:07:46.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.952 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3603895 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3603895 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3603895 ']' 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:46.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:46.952 15:45:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 [2024-05-15 15:45:45.514550] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:07:46.952 [2024-05-15 15:45:45.514597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.211 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.211 [2024-05-15 15:45:45.588141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.211 [2024-05-15 15:45:45.663966] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.211 [2024-05-15 15:45:45.664003] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.211 [2024-05-15 15:45:45.664016] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.211 [2024-05-15 15:45:45.664027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.211 [2024-05-15 15:45:45.664036] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.211 [2024-05-15 15:45:45.664086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.211 [2024-05-15 15:45:45.664181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.211 [2024-05-15 15:45:45.664266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.211 [2024-05-15 15:45:45.664270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.828 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.828 [2024-05-15 15:45:46.383169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:48.098 15:45:46 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.098 Null1 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.098 [2024-05-15 15:45:46.435291] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:48.098 [2024-05-15 15:45:46.435523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.098 Null2 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:48.098 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 Null3 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 Null4 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:07:48.099 00:07:48.099 Discovery Log Number of Records 6, Generation counter 6 00:07:48.099 =====Discovery Log Entry 0====== 00:07:48.099 trtype: tcp 00:07:48.099 adrfam: ipv4 00:07:48.099 subtype: current discovery subsystem 00:07:48.099 treq: not required 00:07:48.099 portid: 0 00:07:48.099 trsvcid: 4420 00:07:48.099 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:48.099 traddr: 10.0.0.2 00:07:48.099 eflags: explicit discovery connections, duplicate discovery information 00:07:48.099 sectype: none 00:07:48.099 =====Discovery Log Entry 1====== 00:07:48.099 trtype: tcp 00:07:48.099 adrfam: ipv4 00:07:48.099 subtype: nvme subsystem 00:07:48.099 treq: not required 00:07:48.099 portid: 0 00:07:48.099 trsvcid: 4420 00:07:48.099 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:48.099 traddr: 10.0.0.2 00:07:48.099 eflags: none 00:07:48.099 sectype: none 00:07:48.099 =====Discovery Log Entry 2====== 00:07:48.099 trtype: tcp 00:07:48.099 adrfam: ipv4 00:07:48.099 subtype: nvme subsystem 00:07:48.099 treq: not required 00:07:48.099 portid: 0 00:07:48.099 trsvcid: 4420 00:07:48.099 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:48.099 traddr: 10.0.0.2 00:07:48.099 eflags: none 00:07:48.099 sectype: none 00:07:48.099 =====Discovery Log Entry 3====== 00:07:48.099 trtype: tcp 00:07:48.099 adrfam: ipv4 00:07:48.099 subtype: nvme subsystem 00:07:48.099 treq: not required 00:07:48.099 portid: 0 00:07:48.099 trsvcid: 4420 00:07:48.099 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:48.099 traddr: 10.0.0.2 
00:07:48.099 eflags: none 00:07:48.099 sectype: none 00:07:48.099 =====Discovery Log Entry 4====== 00:07:48.099 trtype: tcp 00:07:48.099 adrfam: ipv4 00:07:48.099 subtype: nvme subsystem 00:07:48.099 treq: not required 00:07:48.099 portid: 0 00:07:48.099 trsvcid: 4420 00:07:48.099 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:48.099 traddr: 10.0.0.2 00:07:48.099 eflags: none 00:07:48.099 sectype: none 00:07:48.099 =====Discovery Log Entry 5====== 00:07:48.099 trtype: tcp 00:07:48.099 adrfam: ipv4 00:07:48.099 subtype: discovery subsystem referral 00:07:48.099 treq: not required 00:07:48.099 portid: 0 00:07:48.099 trsvcid: 4430 00:07:48.099 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:48.099 traddr: 10.0.0.2 00:07:48.099 eflags: none 00:07:48.099 sectype: none 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:48.099 Perform nvmf subsystem discovery via RPC 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.099 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 [ 00:07:48.099 { 00:07:48.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:48.099 "subtype": "Discovery", 00:07:48.099 "listen_addresses": [ 00:07:48.099 { 00:07:48.099 "trtype": "TCP", 00:07:48.099 "adrfam": "IPv4", 00:07:48.099 "traddr": "10.0.0.2", 00:07:48.099 "trsvcid": "4420" 00:07:48.099 } 00:07:48.099 ], 00:07:48.099 "allow_any_host": true, 00:07:48.099 "hosts": [] 00:07:48.099 }, 00:07:48.099 { 00:07:48.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.099 "subtype": "NVMe", 00:07:48.099 "listen_addresses": [ 00:07:48.099 { 00:07:48.099 "trtype": "TCP", 00:07:48.099 "adrfam": "IPv4", 00:07:48.099 "traddr": "10.0.0.2", 00:07:48.099 "trsvcid": "4420" 00:07:48.099 } 00:07:48.099 ], 00:07:48.099 "allow_any_host": true, 00:07:48.099 "hosts": [], 00:07:48.099 "serial_number": "SPDK00000000000001", 00:07:48.099 "model_number": "SPDK bdev Controller", 00:07:48.099 "max_namespaces": 32, 00:07:48.099 "min_cntlid": 1, 00:07:48.099 "max_cntlid": 65519, 00:07:48.099 "namespaces": [ 00:07:48.099 { 00:07:48.099 "nsid": 1, 00:07:48.099 "bdev_name": "Null1", 00:07:48.099 "name": "Null1", 00:07:48.099 "nguid": "389ECEB1287B4828850C494B150425C2", 00:07:48.099 "uuid": "389eceb1-287b-4828-850c-494b150425c2" 00:07:48.099 } 00:07:48.099 ] 00:07:48.099 }, 00:07:48.358 { 00:07:48.358 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:48.358 "subtype": "NVMe", 00:07:48.359 "listen_addresses": [ 00:07:48.359 { 00:07:48.359 "trtype": "TCP", 00:07:48.359 "adrfam": "IPv4", 00:07:48.359 "traddr": "10.0.0.2", 00:07:48.359 "trsvcid": "4420" 00:07:48.359 } 00:07:48.359 ], 00:07:48.359 "allow_any_host": true, 00:07:48.359 "hosts": [], 00:07:48.359 "serial_number": "SPDK00000000000002", 00:07:48.359 "model_number": "SPDK bdev Controller", 00:07:48.359 "max_namespaces": 32, 00:07:48.359 "min_cntlid": 1, 00:07:48.359 "max_cntlid": 65519, 00:07:48.359 "namespaces": [ 00:07:48.359 { 00:07:48.359 "nsid": 1, 00:07:48.359 "bdev_name": "Null2", 00:07:48.359 "name": "Null2", 00:07:48.359 "nguid": "57218F288697408AB7CBAA511E940178", 00:07:48.359 "uuid": "57218f28-8697-408a-b7cb-aa511e940178" 00:07:48.359 } 00:07:48.359 ] 00:07:48.359 }, 00:07:48.359 { 00:07:48.359 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:48.359 "subtype": "NVMe", 00:07:48.359 "listen_addresses": [ 
00:07:48.359 { 00:07:48.359 "trtype": "TCP", 00:07:48.359 "adrfam": "IPv4", 00:07:48.359 "traddr": "10.0.0.2", 00:07:48.359 "trsvcid": "4420" 00:07:48.359 } 00:07:48.359 ], 00:07:48.359 "allow_any_host": true, 00:07:48.359 "hosts": [], 00:07:48.359 "serial_number": "SPDK00000000000003", 00:07:48.359 "model_number": "SPDK bdev Controller", 00:07:48.359 "max_namespaces": 32, 00:07:48.359 "min_cntlid": 1, 00:07:48.359 "max_cntlid": 65519, 00:07:48.359 "namespaces": [ 00:07:48.359 { 00:07:48.359 "nsid": 1, 00:07:48.359 "bdev_name": "Null3", 00:07:48.359 "name": "Null3", 00:07:48.359 "nguid": "40D18C8A153F459BA7A9E02FAC1BF347", 00:07:48.359 "uuid": "40d18c8a-153f-459b-a7a9-e02fac1bf347" 00:07:48.359 } 00:07:48.359 ] 00:07:48.359 }, 00:07:48.359 { 00:07:48.359 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:48.359 "subtype": "NVMe", 00:07:48.359 "listen_addresses": [ 00:07:48.359 { 00:07:48.359 "trtype": "TCP", 00:07:48.359 "adrfam": "IPv4", 00:07:48.359 "traddr": "10.0.0.2", 00:07:48.359 "trsvcid": "4420" 00:07:48.359 } 00:07:48.359 ], 00:07:48.359 "allow_any_host": true, 00:07:48.359 "hosts": [], 00:07:48.359 "serial_number": "SPDK00000000000004", 00:07:48.359 "model_number": "SPDK bdev Controller", 00:07:48.359 "max_namespaces": 32, 00:07:48.359 "min_cntlid": 1, 00:07:48.359 "max_cntlid": 65519, 00:07:48.359 "namespaces": [ 00:07:48.359 { 00:07:48.359 "nsid": 1, 00:07:48.359 "bdev_name": "Null4", 00:07:48.359 "name": "Null4", 00:07:48.359 "nguid": "E0279D1E49CE44A8A2C4649B3310BA75", 00:07:48.359 "uuid": "e0279d1e-49ce-44a8-a2c4-649b3310ba75" 00:07:48.359 } 00:07:48.359 ] 00:07:48.359 } 00:07:48.359 ] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:48.359 
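The nvmf_target_discovery pass that winds down above reduces to a short RPC sequence against a running nvmf_tgt. The following is a minimal sketch using SPDK's stock scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it); the command names, ports, and four-subsystem loop are taken from the trace itself, with the loop condensed for readability:

    # one null backing bdev -> subsystem -> namespace -> TCP listener, four times over
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # expose the discovery service itself, plus one referral on port 4430
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # host-side view: six records are expected (the discovery subsystem,
    # cnode1..cnode4, and the port-4430 referral), matching the
    # 'Discovery Log Number of Records 6' output earlier in the trace
    nvme discover -t tcp -a 10.0.0.2 -s 4420

Teardown, as in the trace, is the mirror image: nvmf_delete_subsystem and bdev_null_delete per cnode, then nvmf_discovery_remove_referral.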
15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.359 rmmod nvme_tcp 00:07:48.359 rmmod nvme_fabrics 00:07:48.359 rmmod nvme_keyring 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3603895 ']' 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3603895 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3603895 ']' 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3603895 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.359 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3603895 00:07:48.619 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:48.619 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:48.619 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3603895' 00:07:48.619 killing process with pid 3603895 00:07:48.619 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3603895 00:07:48.619 [2024-05-15 15:45:46.935987] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:48.619 15:45:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3603895 00:07:48.619 15:45:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.619 15:45:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.619 15:45:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.619 15:45:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.619 15:45:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.619 15:45:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.619 15:45:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.619 15:45:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.157 15:45:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.157 00:07:51.157 real 0m10.447s 00:07:51.157 user 
0m7.737s 00:07:51.157 sys 0m5.415s 00:07:51.157 15:45:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.157 15:45:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.157 ************************************ 00:07:51.157 END TEST nvmf_target_discovery 00:07:51.157 ************************************ 00:07:51.157 15:45:49 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:51.157 15:45:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:51.157 15:45:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.157 15:45:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.157 ************************************ 00:07:51.157 START TEST nvmf_referrals 00:07:51.157 ************************************ 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:51.157 * Looking for test storage... 00:07:51.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.157 15:45:49 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:51.157 15:45:49 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.157 15:45:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.158 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.158 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.158 15:45:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.158 15:45:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:57.729 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:57.729 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:57.729 Found net devices under 0000:af:00.0: cvl_0_0 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:57.729 Found net devices under 0000:af:00.1: cvl_0_1 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
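Before nvmf_referrals can run, nvmf_tcp_init wires the two detected e810 ports into a split topology: the first port (cvl_0_0) is moved into a private network namespace and will carry the target at 10.0.0.2, while the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The plumbing that begins here condenses to the sketch below; the cvl_0_* names are the aliases this test pool assigns to the ice-driven ports, so they would differ on other hardware:

    # target interface gets its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address each side of the pair
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    # bring the links (and the namespaced loopback) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside that namespace (NVMF_APP gets the NVMF_TARGET_NS_CMD prefix), which is why the nvmf_tgt invocation a little further on carries 'ip netns exec cvl_0_0_ns_spdk'.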
00:07:57.729 15:45:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.729 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.729 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.729 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:07:57.730 00:07:57.730 --- 10.0.0.2 ping statistics --- 00:07:57.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.730 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:07:57.730 00:07:57.730 --- 10.0.0.1 ping statistics --- 00:07:57.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.730 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3607882 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3607882 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3607882 ']' 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:57.730 15:45:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 [2024-05-15 15:45:56.242570] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:07:57.730 [2024-05-15 15:45:56.242617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.730 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.989 [2024-05-15 15:45:56.318812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.989 [2024-05-15 15:45:56.392245] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.989 [2024-05-15 15:45:56.392286] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.989 [2024-05-15 15:45:56.392301] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.989 [2024-05-15 15:45:56.392312] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.989 [2024-05-15 15:45:56.392323] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.989 [2024-05-15 15:45:56.392397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.989 [2024-05-15 15:45:56.392483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.989 [2024-05-15 15:45:56.392568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.989 [2024-05-15 15:45:56.392572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.556 [2024-05-15 15:45:57.102052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.556 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:58.557 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.557 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.557 [2024-05-15 15:45:57.118052] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:58.557 [2024-05-15 15:45:57.118280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:58.816 15:45:57 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.816 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:59.075 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:59.076 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:59.076 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:59.076 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.076 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.076 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:59.076 15:45:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.334 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:59.334 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.335 15:45:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:59.594 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r 
'.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.853 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.112 rmmod nvme_tcp 00:08:00.112 rmmod nvme_fabrics 00:08:00.112 rmmod nvme_keyring 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3607882 ']' 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3607882 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3607882 ']' 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3607882 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:00.112 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3607882 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3607882' 00:08:00.372 killing process with pid 3607882 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3607882 00:08:00.372 [2024-05-15 15:45:58.719900] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3607882 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
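The body of nvmf_referrals, traced above, boils down to adding discovery referrals, checking them from both sides, and removing them again. A condensed sketch — the RPC names and the jq filter are lifted verbatim from the trace, while HOSTNQN and HOSTID stand in for the values the test generates with 'nvme gen-hostnqn':

    # register a referral and confirm the target reports it over RPC
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length
    # confirm a host sees the same referral in the discovery log
    # (8009 is where this test parks the discovery listener)
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID \
        -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # removing the referral must empty both views again
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length

The later passes repeat this with '-n discovery' and '-n nqn.2016-06.io.spdk:cnode1' on the add/remove calls, checking that the referral's subtype (discovery subsystem referral versus nvme subsystem) comes back correctly in the discovery log page.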
00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.372 15:45:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.908 15:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.908 00:08:02.908 real 0m11.691s 00:08:02.908 user 0m12.675s 00:08:02.908 sys 0m5.937s 00:08:02.908 15:46:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.908 15:46:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.908 ************************************ 00:08:02.908 END TEST nvmf_referrals 00:08:02.908 ************************************ 00:08:02.909 15:46:01 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:02.909 15:46:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:02.909 15:46:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.909 15:46:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:02.909 ************************************ 00:08:02.909 START TEST nvmf_connect_disconnect 00:08:02.909 ************************************ 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:02.909 * Looking for test storage... 00:08:02.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.909 15:46:01 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
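For readers following the trace: common.sh derives one host identity up front and reuses it for every 'nvme connect' issued later in the run, and the host ID is simply the UUID portion of the generated host NQN. A minimal sketch of that pattern, using the subsystem NQN, address, and port this suite targets (treat it as an illustration, not an excerpt from common.sh):

  HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:}                # the trailing UUID doubles as the host ID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn="$HOSTNQN" --hostid="$HOSTID"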
00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.909 15:46:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:09.494 
15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:09.494 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:09.494 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:09.494 Found net devices under 0000:af:00.0: cvl_0_0 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:09.494 Found net devices under 0000:af:00.1: cvl_0_1 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.494 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:09.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:08:09.494 00:08:09.494 --- 10.0.0.2 ping statistics --- 00:08:09.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.494 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:08:09.495 00:08:09.495 --- 10.0.0.1 ping statistics --- 00:08:09.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.495 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3611964 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3611964 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3611964 ']' 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:09.495 15:46:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:09.495 [2024-05-15 15:46:07.919442] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
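The nvmftestinit phase traced above reduces to a small amount of plumbing: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, reachability is verified with one ping in each direction, and nvme-tcp is loaded. Condensed from the commands in the trace (cvl_0_0_ns_spdk is the namespace name the harness generates):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp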
00:08:09.495 [2024-05-15 15:46:07.919496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.495 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.495 [2024-05-15 15:46:07.995661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.754 [2024-05-15 15:46:08.071838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.754 [2024-05-15 15:46:08.071872] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.754 [2024-05-15 15:46:08.071886] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.754 [2024-05-15 15:46:08.071897] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.754 [2024-05-15 15:46:08.071907] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.754 [2024-05-15 15:46:08.071953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.754 [2024-05-15 15:46:08.072046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.754 [2024-05-15 15:46:08.072136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.754 [2024-05-15 15:46:08.072140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.394 [2024-05-15 15:46:08.777096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:10.394 15:46:08 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.394 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.395 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.395 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.395 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.395 [2024-05-15 15:46:08.831743] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:10.395 [2024-05-15 15:46:08.832024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.395 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.395 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:10.395 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:10.395 15:46:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:14.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.746 rmmod nvme_tcp 00:08:27.746 rmmod nvme_fabrics 00:08:27.746 rmmod nvme_keyring 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:27.746 15:46:26 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3611964 ']' 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3611964 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3611964 ']' 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3611964 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3611964 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3611964' 00:08:27.746 killing process with pid 3611964 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3611964 00:08:27.746 [2024-05-15 15:46:26.220150] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:27.746 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3611964 00:08:28.005 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.005 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.005 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.005 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.005 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.005 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.005 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.005 15:46:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.544 15:46:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.544 00:08:30.544 real 0m27.426s 00:08:30.544 user 1m14.212s 00:08:30.544 sys 0m6.997s 00:08:30.544 15:46:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:30.544 15:46:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.544 ************************************ 00:08:30.544 END TEST nvmf_connect_disconnect 00:08:30.544 ************************************ 00:08:30.544 15:46:28 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:30.544 15:46:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:30.544 15:46:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:30.544 15:46:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.544 ************************************ 00:08:30.544 START TEST nvmf_multitarget 
00:08:30.544 ************************************ 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:30.544 * Looking for test storage... 00:08:30.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.544 15:46:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
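This is the same initialization already traced for the previous test: every per-test script in the run re-sources common.sh, so the PATH exports, device discovery, and namespace setup repeat verbatim. A rough outline of the bracket each test follows (names taken from the trace; only the body in the middle differs per test):

  nvmftestinit          # discover the E810 ports, build the netns topology, load nvme-tcp
  nvmfappstart -m 0xF   # start nvmf_tgt inside the namespace and wait for its RPC socket
  # ... test body: rpc_cmd, nvme connect/disconnect, multitarget_rpc.py, etc.
  nvmftestfini          # unload the nvme modules, kill nvmf_tgt, tear down the namespace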
00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.545 15:46:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.142 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:37.143 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:37.143 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:37.143 Found net devices under 0000:af:00.0: cvl_0_0 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:37.143 Found net devices under 0000:af:00.1: cvl_0_1 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:37.143 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:37.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:08:37.402 00:08:37.402 --- 10.0.0.2 ping statistics --- 00:08:37.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.402 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:08:37.402 00:08:37.402 --- 10.0.0.1 ping statistics --- 00:08:37.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.402 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3618961 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3618961 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3618961 ']' 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:37.402 15:46:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:37.402 [2024-05-15 15:46:35.868241] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
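The multitarget test starting here exercises SPDK's multi-target RPCs through multitarget_rpc.py: verify that only the default target exists, create two extra targets, confirm the count, then delete them again. The calls traced below reduce to roughly the following sketch ($SPDK_DIR stands in for the full checkout path shown in the trace; the jq checks mirror the '[ N != N ]' assertions):

  rpc=$SPDK_DIR/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]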
00:08:37.402 [2024-05-15 15:46:35.868287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.402 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.402 [2024-05-15 15:46:35.942851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.660 [2024-05-15 15:46:36.018175] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.660 [2024-05-15 15:46:36.018218] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.660 [2024-05-15 15:46:36.018228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.660 [2024-05-15 15:46:36.018237] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.660 [2024-05-15 15:46:36.018244] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.660 [2024-05-15 15:46:36.022211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.660 [2024-05-15 15:46:36.022231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.660 [2024-05-15 15:46:36.022315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.660 [2024-05-15 15:46:36.022317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.227 15:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:38.227 15:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:08:38.227 15:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.227 15:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.227 15:46:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:38.227 15:46:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.227 15:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:38.227 15:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:38.228 15:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:38.486 15:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:38.486 15:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:38.486 "nvmf_tgt_1" 00:08:38.486 15:46:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:38.486 "nvmf_tgt_2" 00:08:38.486 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:38.486 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:38.744 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:38.744 
15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:38.744 true 00:08:38.744 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:39.002 true 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.002 rmmod nvme_tcp 00:08:39.002 rmmod nvme_fabrics 00:08:39.002 rmmod nvme_keyring 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3618961 ']' 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3618961 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3618961 ']' 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3618961 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:39.002 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3618961 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3618961' 00:08:39.261 killing process with pid 3618961 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3618961 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3618961 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.261 15:46:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.800 15:46:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:41.800 00:08:41.800 real 0m11.244s 00:08:41.800 user 0m9.516s 00:08:41.800 sys 0m5.961s 00:08:41.800 15:46:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.800 15:46:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:41.800 ************************************ 00:08:41.800 END TEST nvmf_multitarget 00:08:41.800 ************************************ 00:08:41.800 15:46:39 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:41.800 15:46:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:41.800 15:46:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:41.800 15:46:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.800 ************************************ 00:08:41.800 START TEST nvmf_rpc 00:08:41.800 ************************************ 00:08:41.800 15:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:41.800 * Looking for test storage... 00:08:41.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.800 15:46:40 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.800 
15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:41.800 15:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:48.368 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:48.368 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.368 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:48.369 Found net devices under 0000:af:00.0: cvl_0_0 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.369 
15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:48.369 Found net devices under 0000:af:00.1: cvl_0_1 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.369 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.628 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.628 15:46:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
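After matching the two e810 ports (0000:af:00.0/cvl_0_0 and 0000:af:00.1/cvl_0_1) by PCI id, nvmf_tcp_init moves the target-side port into a fresh namespace and leaves its sibling in the root namespace, so target (10.0.0.2) and initiator (10.0.0.1) talk over a real link. The plumbing, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target sanity check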
00:08:48.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:08:48.628 00:08:48.628 --- 10.0.0.2 ping statistics --- 00:08:48.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.628 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:48.628 00:08:48.628 --- 10.0.0.1 ping statistics --- 00:08:48.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.628 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3623118 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3623118 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3623118 ']' 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:48.628 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.628 [2024-05-15 15:46:47.125303] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
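Once the target is up, rpc.sh queries nvmf_get_stats and checks the JSON dump that follows with two small jq helpers: jcount counts matches of a filter, jsum totals them. With -m 0xF there should be four poll groups (one per reactor core) and zero qpairs before any host connects. A sketch of those two checks, assuming rpc_cmd is the harness wrapper that forwards to scripts/rpc.py:

    stats=$(rpc_cmd nvmf_get_stats)
    # jcount '.poll_groups[].name': one poll group per core in the 0xF mask
    echo "$stats" | jq '.poll_groups[].name' | wc -l                                  # expect 4
    # jsum '.poll_groups[].admin_qpairs': no connections yet, so the total is 0
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'    # expect 0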
00:08:48.628 [2024-05-15 15:46:47.125348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.628 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.887 [2024-05-15 15:46:47.199272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.887 [2024-05-15 15:46:47.269508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.887 [2024-05-15 15:46:47.269554] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.887 [2024-05-15 15:46:47.269563] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.887 [2024-05-15 15:46:47.269571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.887 [2024-05-15 15:46:47.269578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.887 [2024-05-15 15:46:47.269673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.887 [2024-05-15 15:46:47.269767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.887 [2024-05-15 15:46:47.269833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.887 [2024-05-15 15:46:47.269834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.454 15:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.454 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.454 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:49.454 "tick_rate": 2500000000, 00:08:49.454 "poll_groups": [ 00:08:49.454 { 00:08:49.454 "name": "nvmf_tgt_poll_group_000", 00:08:49.454 "admin_qpairs": 0, 00:08:49.454 "io_qpairs": 0, 00:08:49.454 "current_admin_qpairs": 0, 00:08:49.454 "current_io_qpairs": 0, 00:08:49.454 "pending_bdev_io": 0, 00:08:49.454 "completed_nvme_io": 0, 00:08:49.454 "transports": [] 00:08:49.454 }, 00:08:49.454 { 00:08:49.454 "name": "nvmf_tgt_poll_group_001", 00:08:49.454 "admin_qpairs": 0, 00:08:49.454 "io_qpairs": 0, 00:08:49.454 "current_admin_qpairs": 0, 00:08:49.454 "current_io_qpairs": 0, 00:08:49.454 "pending_bdev_io": 0, 00:08:49.454 "completed_nvme_io": 0, 00:08:49.454 "transports": [] 00:08:49.454 }, 00:08:49.454 { 00:08:49.454 "name": "nvmf_tgt_poll_group_002", 00:08:49.454 "admin_qpairs": 0, 00:08:49.454 "io_qpairs": 0, 00:08:49.454 "current_admin_qpairs": 0, 00:08:49.454 "current_io_qpairs": 0, 00:08:49.454 "pending_bdev_io": 0, 00:08:49.454 "completed_nvme_io": 0, 00:08:49.454 "transports": [] 
00:08:49.454 }, 00:08:49.454 { 00:08:49.454 "name": "nvmf_tgt_poll_group_003", 00:08:49.454 "admin_qpairs": 0, 00:08:49.454 "io_qpairs": 0, 00:08:49.454 "current_admin_qpairs": 0, 00:08:49.454 "current_io_qpairs": 0, 00:08:49.454 "pending_bdev_io": 0, 00:08:49.454 "completed_nvme_io": 0, 00:08:49.454 "transports": [] 00:08:49.454 } 00:08:49.454 ] 00:08:49.454 }' 00:08:49.454 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:49.454 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:49.454 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:49.454 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:49.737 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.738 [2024-05-15 15:46:48.096368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:49.738 "tick_rate": 2500000000, 00:08:49.738 "poll_groups": [ 00:08:49.738 { 00:08:49.738 "name": "nvmf_tgt_poll_group_000", 00:08:49.738 "admin_qpairs": 0, 00:08:49.738 "io_qpairs": 0, 00:08:49.738 "current_admin_qpairs": 0, 00:08:49.738 "current_io_qpairs": 0, 00:08:49.738 "pending_bdev_io": 0, 00:08:49.738 "completed_nvme_io": 0, 00:08:49.738 "transports": [ 00:08:49.738 { 00:08:49.738 "trtype": "TCP" 00:08:49.738 } 00:08:49.738 ] 00:08:49.738 }, 00:08:49.738 { 00:08:49.738 "name": "nvmf_tgt_poll_group_001", 00:08:49.738 "admin_qpairs": 0, 00:08:49.738 "io_qpairs": 0, 00:08:49.738 "current_admin_qpairs": 0, 00:08:49.738 "current_io_qpairs": 0, 00:08:49.738 "pending_bdev_io": 0, 00:08:49.738 "completed_nvme_io": 0, 00:08:49.738 "transports": [ 00:08:49.738 { 00:08:49.738 "trtype": "TCP" 00:08:49.738 } 00:08:49.738 ] 00:08:49.738 }, 00:08:49.738 { 00:08:49.738 "name": "nvmf_tgt_poll_group_002", 00:08:49.738 "admin_qpairs": 0, 00:08:49.738 "io_qpairs": 0, 00:08:49.738 "current_admin_qpairs": 0, 00:08:49.738 "current_io_qpairs": 0, 00:08:49.738 "pending_bdev_io": 0, 00:08:49.738 "completed_nvme_io": 0, 00:08:49.738 "transports": [ 00:08:49.738 { 00:08:49.738 "trtype": "TCP" 00:08:49.738 } 00:08:49.738 ] 00:08:49.738 }, 00:08:49.738 { 00:08:49.738 "name": "nvmf_tgt_poll_group_003", 00:08:49.738 "admin_qpairs": 0, 00:08:49.738 "io_qpairs": 0, 00:08:49.738 "current_admin_qpairs": 0, 00:08:49.738 "current_io_qpairs": 0, 00:08:49.738 "pending_bdev_io": 0, 00:08:49.738 "completed_nvme_io": 0, 00:08:49.738 "transports": [ 00:08:49.738 { 00:08:49.738 "trtype": "TCP" 00:08:49.738 } 00:08:49.738 ] 00:08:49.738 } 00:08:49.738 ] 
00:08:49.738 }' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.738 Malloc1 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.738 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.738 [2024-05-15 15:46:48.275315] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:49.738 [2024-05-15 15:46:48.275648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.998 15:46:48 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:49.998 [2024-05-15 15:46:48.304338] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:49.998 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:49.998 could not add new controller: failed to write to nvme-fabrics device 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.998 15:46:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.376 15:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
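The failed connect above is the intended negative test: the subsystem was created with allow_any_host enabled (-a), the test then disabled it, and the access check rejects the host NQN ("does not allow host") until it is added explicitly. The positive path, condensed from the trace:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    HOSTID=006f0d1b-21c0-e711-906e-00163566263e
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420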
00:08:51.376 15:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:51.376 15:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.376 15:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:51.376 15:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:53.282 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:53.282 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:53.282 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.282 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:53.282 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.282 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:53.282 15:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.541 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:53.542 [2024-05-15 15:46:51.944350] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:53.542 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:53.542 could not add new controller: failed to write to nvme-fabrics device 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.542 15:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.922 15:46:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.922 15:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:54.922 15:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.922 15:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:54.922 15:46:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:56.860 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:56.860 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:56.860 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.860 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:56.860 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.860 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:56.860 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.119 [2024-05-15 15:46:55.515592] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.119 15:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.120 15:46:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.499 15:46:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.499 15:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:58.499 15:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.499 15:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:58.499 15:46:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:00.403 15:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:00.403 
15:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:00.403 15:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.403 15:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:00.403 15:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.403 15:46:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:00.403 15:46:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.663 [2024-05-15 15:46:59.076701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.663 15:46:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.042 15:47:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.042 15:47:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:02.042 15:47:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.042 15:47:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:02.042 15:47:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:03.946 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.206 15:47:02 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.206 [2024-05-15 15:47:02.568344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.206 15:47:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.582 15:47:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.582 15:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:05.582 15:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.582 15:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:05.582 15:47:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:07.487 15:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:07.487 15:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:07.487 15:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.487 15:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:07.487 15:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.487 15:47:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:07.487 15:47:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.487 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.487 15:47:06 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:09:07.487 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:07.487 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.487 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:07.487 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.745 [2024-05-15 15:47:06.087093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.745 15:47:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.122 15:47:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:09:09.122 15:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:09.122 15:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.123 15:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:09.123 15:47:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:11.027 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.028 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:11.028 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.286 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.286 
[2024-05-15 15:47:09.635189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.287 15:47:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.664 15:47:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.664 15:47:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:12.664 15:47:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.664 15:47:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:12.664 15:47:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.607 15:47:13 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 [2024-05-15 15:47:13.204556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 [2024-05-15 15:47:13.252670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.867 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 [2024-05-15 15:47:13.304812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:14.868 
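The trace here is the second loop in target/rpc.sh (lines @99-@107), which builds and tears down the same subsystem five times (seq 1 5) without a host connect in between. Condensed into one iteration, using only commands that appear in this log (the $rpc shorthand for scripts/rpc.py is illustrative, not from the script itself), it is roughly:

    # sketch of one iteration of the rpc.sh@99 loop
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 $loops); do
        $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME           # rpc.sh@100
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@101
        $rpc nvmf_subsystem_add_ns $nqn Malloc1                           # rpc.sh@102
        $rpc nvmf_subsystem_allow_any_host $nqn                           # rpc.sh@103
        $rpc nvmf_subsystem_remove_ns $nqn 1                              # rpc.sh@105
        $rpc nvmf_delete_subsystem $nqn                                   # rpc.sh@107
    done

The earlier loop (@81-@94) is the same cycle plus an actual nvme connect / disconnect between setup and teardown.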
15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 [2024-05-15 15:47:13.352961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 [2024-05-15 15:47:13.401137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.868 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.127 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
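Each of the connect cycles earlier in the test gates on two polling helpers from autotest_common.sh, visible above as the @1194-@1204 and @1215-@1227 trace lines. Reconstructed from that trace (retry bound and the lsblk/grep probes as logged; the exact script text may differ), the logic is approximately:

    # waitforserial: block until a namespace with the given serial shows up in lsblk
    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

    # waitforserial_disconnect: block until the serial disappears again
    waitforserial_disconnect() {
        local i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
            ((i++ > 15)) && return 1
            sleep 1        # interval assumed; the trace only shows the lsblk rechecks
        done
        return 0
    }

In the runs above the device appears after a single two-second sleep (grep -c reports 1) and is already gone at the first recheck after disconnect, so both helpers return 0 immediately.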
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:09:15.128 "tick_rate": 2500000000,
00:09:15.128 "poll_groups": [
00:09:15.128 {
00:09:15.128 "name": "nvmf_tgt_poll_group_000",
00:09:15.128 "admin_qpairs": 2,
00:09:15.128 "io_qpairs": 196,
00:09:15.128 "current_admin_qpairs": 0,
00:09:15.128 "current_io_qpairs": 0,
00:09:15.128 "pending_bdev_io": 0,
00:09:15.128 "completed_nvme_io": 296,
00:09:15.128 "transports": [
00:09:15.128 {
00:09:15.128 "trtype": "TCP"
00:09:15.128 }
00:09:15.128 ]
00:09:15.128 },
00:09:15.128 {
00:09:15.128 "name": "nvmf_tgt_poll_group_001",
00:09:15.128 "admin_qpairs": 2,
00:09:15.128 "io_qpairs": 196,
00:09:15.128 "current_admin_qpairs": 0,
00:09:15.128 "current_io_qpairs": 0,
00:09:15.128 "pending_bdev_io": 0,
00:09:15.128 "completed_nvme_io": 301,
00:09:15.128 "transports": [
00:09:15.128 {
00:09:15.128 "trtype": "TCP"
00:09:15.128 }
00:09:15.128 ]
00:09:15.128 },
00:09:15.128 {
00:09:15.128 "name": "nvmf_tgt_poll_group_002",
00:09:15.128 "admin_qpairs": 1,
00:09:15.128 "io_qpairs": 196,
00:09:15.128 "current_admin_qpairs": 0,
00:09:15.128 "current_io_qpairs": 0,
00:09:15.128 "pending_bdev_io": 0,
00:09:15.128 "completed_nvme_io": 246,
00:09:15.128 "transports": [
00:09:15.128 {
00:09:15.128 "trtype": "TCP"
00:09:15.128 }
00:09:15.128 ]
00:09:15.128 },
00:09:15.128 {
00:09:15.128 "name": "nvmf_tgt_poll_group_003",
00:09:15.128 "admin_qpairs": 2,
00:09:15.128 "io_qpairs": 196,
00:09:15.128 "current_admin_qpairs": 0,
00:09:15.128 "current_io_qpairs": 0,
00:09:15.128 "pending_bdev_io": 0,
00:09:15.128 "completed_nvme_io": 291,
00:09:15.128 "transports": [
00:09:15.128 {
00:09:15.128 "trtype": "TCP"
00:09:15.128 }
00:09:15.128 ]
00:09:15.128 }
00:09:15.128 ]
00:09:15.128 }'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 ))
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:15.128 rmmod nvme_tcp
00:09:15.128 rmmod nvme_fabrics
00:09:15.128 rmmod nvme_keyring
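The jsum calls at rpc.sh@112/@113 just above reduce the captured nvmf_get_stats JSON to a single number and assert it is positive. From the @19/@20 trace lines, the helper pipes a jq filter into an awk accumulator; a sketch of that shape (how $stats reaches jq is assumed, since the plumbing is not shown in the trace):

    # jsum: sum one numeric field across all poll groups in the stats JSON
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 in the run above
    jsum '.poll_groups[].io_qpairs'      # 4*196  = 784 in the run above

The (( 7 > 0 )) and (( 784 > 0 )) checks then confirm the target actually served admin and I/O queue pairs during the test.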
15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3623118 ']' 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3623118 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3623118 ']' 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3623118 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3623118 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3623118' 00:09:15.128 killing process with pid 3623118 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3623118 00:09:15.128 [2024-05-15 15:47:13.692796] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:15.128 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3623118 00:09:15.386 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.386 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.386 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.386 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.386 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.386 15:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.386 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.386 15:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.922 15:47:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.922 00:09:17.922 real 0m36.062s 00:09:17.922 user 1m47.168s 00:09:17.922 sys 0m8.403s 00:09:17.922 15:47:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:17.922 15:47:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.922 ************************************ 00:09:17.922 END TEST nvmf_rpc 00:09:17.922 ************************************ 00:09:17.922 15:47:16 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:17.922 15:47:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:17.922 15:47:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.922 15:47:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:17.922 ************************************ 00:09:17.922 START TEST nvmf_invalid 00:09:17.922 ************************************ 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:17.922 * Looking for test storage... 00:09:17.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.922 15:47:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.923 15:47:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:24.493 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:24.493 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.493 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:24.494 Found net devices under 0000:af:00.0: cvl_0_0 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:24.494 Found net devices under 0000:af:00.1: cvl_0_1 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:09:24.494 00:09:24.494 --- 10.0.0.2 ping statistics --- 00:09:24.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.494 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:09:24.494 00:09:24.494 --- 10.0.0.1 ping statistics --- 00:09:24.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.494 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3631369 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3631369 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3631369 ']' 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:24.494 15:47:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.494 [2024-05-15 15:47:22.678546] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
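The two pings close out nvmf_tcp_init: the first e810 port (cvl_0_0) was moved into a private network namespace to act as the target at 10.0.0.2, and the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands traced above in nvmf/common.sh:

    # NVMe/TCP test network bring-up, as performed by nvmf_tcp_init
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

With both directions reachable, nvmf_tgt is launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt line just above), so its listener on 10.0.0.2:4420 is only reachable over the wire from the initiator port.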
00:09:24.494 [2024-05-15 15:47:22.678592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.494 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.494 [2024-05-15 15:47:22.753004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.494 [2024-05-15 15:47:22.828328] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.494 [2024-05-15 15:47:22.828364] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.494 [2024-05-15 15:47:22.828374] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.494 [2024-05-15 15:47:22.828382] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.494 [2024-05-15 15:47:22.828390] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.494 [2024-05-15 15:47:22.828436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.494 [2024-05-15 15:47:22.828453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.494 [2024-05-15 15:47:22.828537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.494 [2024-05-15 15:47:22.828539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.063 15:47:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:25.063 15:47:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:09:25.063 15:47:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.063 15:47:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.063 15:47:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:25.063 15:47:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.063 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:25.063 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7759 00:09:25.322 [2024-05-15 15:47:23.684445] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:25.322 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:25.322 { 00:09:25.322 "nqn": "nqn.2016-06.io.spdk:cnode7759", 00:09:25.322 "tgt_name": "foobar", 00:09:25.322 "method": "nvmf_create_subsystem", 00:09:25.322 "req_id": 1 00:09:25.322 } 00:09:25.322 Got JSON-RPC error response 00:09:25.322 response: 00:09:25.322 { 00:09:25.322 "code": -32603, 00:09:25.322 "message": "Unable to find target foobar" 00:09:25.322 }' 00:09:25.322 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:25.322 { 00:09:25.322 "nqn": "nqn.2016-06.io.spdk:cnode7759", 00:09:25.322 "tgt_name": "foobar", 00:09:25.322 "method": "nvmf_create_subsystem", 00:09:25.322 "req_id": 1 00:09:25.322 } 00:09:25.322 Got JSON-RPC error response 00:09:25.322 response: 00:09:25.322 { 00:09:25.322 "code": -32603, 00:09:25.322 "message": "Unable to find target foobar" 00:09:25.322 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:25.323 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:25.323 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17807 00:09:25.323 [2024-05-15 15:47:23.869111] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17807: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:25.582 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:25.582 { 00:09:25.582 "nqn": "nqn.2016-06.io.spdk:cnode17807", 00:09:25.582 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:25.582 "method": "nvmf_create_subsystem", 00:09:25.582 "req_id": 1 00:09:25.582 } 00:09:25.582 Got JSON-RPC error response 00:09:25.582 response: 00:09:25.582 { 00:09:25.582 "code": -32602, 00:09:25.582 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:25.582 }' 00:09:25.582 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:25.582 { 00:09:25.582 "nqn": "nqn.2016-06.io.spdk:cnode17807", 00:09:25.582 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:25.582 "method": "nvmf_create_subsystem", 00:09:25.582 "req_id": 1 00:09:25.582 } 00:09:25.582 Got JSON-RPC error response 00:09:25.582 response: 00:09:25.582 { 00:09:25.582 "code": -32602, 00:09:25.582 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:25.582 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:25.582 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:25.582 15:47:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30355 00:09:25.582 [2024-05-15 15:47:24.057707] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30355: invalid model number 'SPDK_Controller' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:25.582 { 00:09:25.582 "nqn": "nqn.2016-06.io.spdk:cnode30355", 00:09:25.582 "model_number": "SPDK_Controller\u001f", 00:09:25.582 "method": "nvmf_create_subsystem", 00:09:25.582 "req_id": 1 00:09:25.582 } 00:09:25.582 Got JSON-RPC error response 00:09:25.582 response: 00:09:25.582 { 00:09:25.582 "code": -32602, 00:09:25.582 "message": "Invalid MN SPDK_Controller\u001f" 00:09:25.582 }' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:25.582 { 00:09:25.582 "nqn": "nqn.2016-06.io.spdk:cnode30355", 00:09:25.582 "model_number": "SPDK_Controller\u001f", 00:09:25.582 "method": "nvmf_create_subsystem", 00:09:25.582 "req_id": 1 00:09:25.582 } 00:09:25.582 Got JSON-RPC error response 00:09:25.582 response: 00:09:25.582 { 00:09:25.582 "code": -32602, 00:09:25.582 "message": "Invalid MN SPDK_Controller\u001f" 00:09:25.582 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.582 15:47:24 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.841 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 120 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:09:25.842 15:47:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'IHo~*HfC"_q4BSvx /dev/null' 00:09:28.437 15:47:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.968 15:47:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.968 00:09:30.968 real 0m12.842s 00:09:30.968 user 0m19.902s 00:09:30.968 sys 0m6.078s 00:09:30.968 15:47:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:30.968 15:47:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:30.968 ************************************ 00:09:30.968 END TEST nvmf_invalid 00:09:30.968 ************************************ 00:09:30.968 15:47:28 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:30.968 15:47:28 nvmf_tcp 
-- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:30.968 15:47:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:30.968 15:47:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.968 ************************************ 00:09:30.968 START TEST nvmf_abort 00:09:30.968 ************************************ 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:30.968 * Looking for test storage... 00:09:30.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.968 15:47:29 
nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.968 15:47:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.969 15:47:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.580 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
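The device scan traced in the lines that follow condenses to a small sysfs idiom. A minimal standalone sketch, with the PCI addresses taken from this run and everything else assumed:

    #!/usr/bin/env bash
    # For each NIC port found in the E810 ID table, resolve the kernel net
    # device behind it via sysfs, the same way the traced nvmf/common.sh does.
    pci_devs=(0000:af:00.0 0000:af:00.1)   # E810 ports seen in this run (0x8086 - 0x159b)
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one glob entry per interface
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done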
00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:37.581 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:37.581 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:37.581 Found net devices under 0000:af:00.0: cvl_0_0 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:37.581 Found net devices under 0000:af:00.1: cvl_0_1 00:09:37.581 15:47:35 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:37.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:09:37.581 00:09:37.581 --- 10.0.0.2 ping statistics --- 00:09:37.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.581 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:09:37.581 00:09:37.581 --- 10.0.0.1 ping statistics --- 00:09:37.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.581 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3636029 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3636029 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3636029 ']' 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:37.581 15:47:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.582 15:47:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:37.582 15:47:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:37.582 [2024-05-15 15:47:35.791441] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:09:37.582 [2024-05-15 15:47:35.791486] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.582 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.582 [2024-05-15 15:47:35.864090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:37.582 [2024-05-15 15:47:35.939315] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.582 [2024-05-15 15:47:35.939350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:37.582 [2024-05-15 15:47:35.939360] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.582 [2024-05-15 15:47:35.939369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.582 [2024-05-15 15:47:35.939376] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.582 [2024-05-15 15:47:35.939473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.582 [2024-05-15 15:47:35.939569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.582 [2024-05-15 15:47:35.939571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:38.150 [2024-05-15 15:47:36.663824] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:38.150 Malloc0 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:38.150 Delay0 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.150 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.409 15:47:36 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:38.409 [2024-05-15 15:47:36.736319] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:38.409 [2024-05-15 15:47:36.736564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.409 15:47:36 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:38.409 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.409 [2024-05-15 15:47:36.855682] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:40.947 Initializing NVMe Controllers 00:09:40.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:40.947 controller IO queue size 128 less than required 00:09:40.947 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:40.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:40.947 Initialization complete. Launching workers. 
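What just ran, reduced to a standalone command; paths, address, and flags are exactly as traced above, and the completion counters follow below:

    # The abort exerciser connects to the TCP subsystem created above and
    # deliberately over-subscribes it: with -q 128 the tool itself warns that
    # the controller's 128-entry IO queue is less than required, so requests
    # queue up in the NVMe driver and become candidates for the abort path.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/abort" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128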
00:09:40.947 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 42157 00:09:40.947 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42221, failed to submit 62 00:09:40.947 success 42161, unsuccess 60, failed 0 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.947 rmmod nvme_tcp 00:09:40.947 rmmod nvme_fabrics 00:09:40.947 rmmod nvme_keyring 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:40.947 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:40.948 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3636029 ']' 00:09:40.948 15:47:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3636029 00:09:40.948 15:47:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3636029 ']' 00:09:40.948 15:47:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3636029 00:09:40.948 15:47:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:09:40.948 15:47:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:40.948 15:47:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3636029 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3636029' 00:09:40.948 killing process with pid 3636029 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3636029 00:09:40.948 [2024-05-15 15:47:39.049087] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3636029 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.948 
15:47:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.948 15:47:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.855 15:47:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:42.855 00:09:42.855 real 0m12.316s 00:09:42.855 user 0m13.162s 00:09:42.855 sys 0m6.255s 00:09:42.855 15:47:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:42.855 15:47:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:42.855 ************************************ 00:09:42.855 END TEST nvmf_abort 00:09:42.855 ************************************ 00:09:42.855 15:47:41 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:42.855 15:47:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:42.855 15:47:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:42.855 15:47:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.114 ************************************ 00:09:43.114 START TEST nvmf_ns_hotplug_stress 00:09:43.115 ************************************ 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:43.115 * Looking for test storage... 00:09:43.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.115 
15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.115 
15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:43.115 15:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.691 15:47:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:49.691 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:49.691 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.691 
15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:49.691 Found net devices under 0000:af:00.0: cvl_0_0 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:49.691 Found net devices under 0000:af:00.1: cvl_0_1 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.691 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.692 
15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.692 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:49.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:09:49.950 00:09:49.950 --- 10.0.0.2 ping statistics --- 00:09:49.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.950 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:09:49.950 00:09:49.950 --- 10.0.0.1 ping statistics --- 00:09:49.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.950 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.950 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.951 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:49.951 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.951 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:49.951 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3640274 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3640274 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3640274 ']' 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:50.209 15:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.209 [2024-05-15 15:47:48.573655] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:09:50.209 [2024-05-15 15:47:48.573704] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.209 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.209 [2024-05-15 15:47:48.647998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.209 [2024-05-15 15:47:48.721362] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
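With both pings answering, the @229-@268 block above has completed nvmf_tcp_init: the target-side port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk), the two ports get 10.0.0.1/24 (initiator, default namespace) and 10.0.0.2/24 (target, inside the namespace), an iptables rule admits NVMe/TCP traffic on port 4420, and connectivity is verified in both directions; the nvmf_tgt startup notices continue below. Condensed into a standalone sketch (same interface names and addresses as the trace, run as root, the initial address flushes omitted):

  # Sketch of the nvmf_tcp_init steps traced above.
  TARGET_NS=cvl_0_0_ns_spdk
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, default namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  # Allow NVMe/TCP traffic to the default listener port.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator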
00:09:50.209 [2024-05-15 15:47:48.721397] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.209 [2024-05-15 15:47:48.721406] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.209 [2024-05-15 15:47:48.721415] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.209 [2024-05-15 15:47:48.721438] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.209 [2024-05-15 15:47:48.721537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.209 [2024-05-15 15:47:48.721629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.209 [2024-05-15 15:47:48.721631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:51.146 [2024-05-15 15:47:49.578363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.146 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:51.404 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.404 [2024-05-15 15:47:49.943959] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:51.404 [2024-05-15 15:47:49.944224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.662 15:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.662 15:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:51.920 Malloc0 00:09:51.920 15:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:52.178 Delay0 00:09:52.178 15:47:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.178 15:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:52.437 NULL1 00:09:52.437 15:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:52.695 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3640837 00:09:52.695 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:52.695 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:52.695 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.695 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.954 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.954 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:52.954 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:53.212 true 00:09:53.212 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:53.212 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.471 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.471 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:53.471 15:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:53.730 true 00:09:53.730 15:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:53.730 15:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.013 15:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.013 15:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:54.013 15:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:09:54.296 true 00:09:54.296 15:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:54.296 15:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.555 15:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.555 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:54.555 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:54.814 true 00:09:54.814 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:54.814 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.072 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.073 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:55.073 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:55.331 true 00:09:55.331 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:55.331 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.589 15:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.847 15:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:55.847 15:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:55.847 true 00:09:55.847 15:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:55.847 15:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.222 Read completed with error (sct=0, sc=11) 00:09:57.222 15:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.222 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:09:57.222 15:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:57.222 15:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:57.481 true 00:09:57.481 15:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:57.481 15:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.419 15:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.419 15:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:58.419 15:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:58.678 true 00:09:58.678 15:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:58.678 15:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.937 15:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.937 15:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:58.937 15:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:59.197 true 00:09:59.197 15:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:09:59.197 15:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.576 15:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.576 15:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:00.576 15:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:00.835 true 00:10:00.835 15:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:00.835 15:47:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.774 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.774 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:01.774 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:02.033 true 00:10:02.033 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:02.033 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.033 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.292 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:02.292 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:02.552 true 00:10:02.552 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:02.552 15:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.928 15:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.928 15:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:03.928 15:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:03.928 true 00:10:04.187 15:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:04.187 15:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.755 15:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.016 15:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:05.016 15:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:05.277 true 00:10:05.277 15:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:05.277 15:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.536 15:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.536 15:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:05.536 15:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:05.795 true 00:10:05.795 15:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:05.795 15:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.174 15:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.174 15:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:07.174 15:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:07.433 true 00:10:07.433 15:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:07.434 15:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.372 15:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.372 15:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:08.372 15:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:08.372 true 00:10:08.664 15:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:08.664 15:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.664 15:48:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.923 15:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:08.923 15:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:08.923 true 00:10:08.923 15:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:08.923 15:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.182 15:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.442 15:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:09.442 15:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:09.701 true 00:10:09.701 15:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:09.701 15:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.701 15:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.960 15:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:09.960 15:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:10.219 true 00:10:10.219 15:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:10.219 15:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.598 15:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.598 15:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:11.598 15:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:11.598 true 00:10:11.598 15:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 3640837 00:10:11.598 15:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.536 15:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.795 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:12.795 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:12.795 true 00:10:12.795 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:12.795 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.055 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.314 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:13.314 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:13.314 true 00:10:13.314 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:13.314 15:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.693 15:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.693 15:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:14.693 15:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:14.952 true 00:10:14.952 15:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:14.952 15:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.889 15:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.889 15:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:15.889 15:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:16.149 true 00:10:16.149 15:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:16.149 15:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.408 15:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.408 15:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:16.408 15:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:16.668 true 00:10:16.668 15:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:16.668 15:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.046 15:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.046 15:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:18.046 15:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:18.306 true 00:10:18.306 15:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:18.306 15:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.244 15:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.244 15:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:19.244 15:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:19.504 true 00:10:19.504 15:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:19.504 15:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.504 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.763 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:19.763 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:20.022 true 00:10:20.022 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:20.022 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.022 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.282 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:20.282 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:20.541 true 00:10:20.541 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:20.541 15:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.479 15:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.479 15:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:21.479 15:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:21.738 true 00:10:21.738 15:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:21.738 15:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.998 15:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.998 15:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:21.998 15:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:22.257 true 00:10:22.257 15:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:22.257 15:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.516 15:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.516 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:22.516 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:22.777 true 00:10:22.777 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:22.777 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.058 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.058 Initializing NVMe Controllers 00:10:23.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:23.058 Controller IO queue size 128, less than required. 00:10:23.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:23.058 Controller IO queue size 128, less than required. 00:10:23.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:23.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:23.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:23.058 Initialization complete. Launching workers. 
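The run from @23 to this point is the single-namespace stress phase: nvmf_tgt is started inside the namespace, a TCP transport and the subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420 are configured over rpc.py, a Malloc-backed Delay0 bdev and a 512-byte-block null bdev NULL1 are attached as namespaces, and spdk_nvme_perf runs a 30-second randread load at queue depth 128 while the @44-@50 loop detaches namespace 1, re-adds Delay0, and bumps NULL1's size one step at a time (1001 through 1034). The suppressed 'Read completed with error (sct=0, sc=11)' messages are perf observing reads that complete with an error status in the window where namespace 1 is detached, which is exactly the condition the test is designed to provoke; perf's results table follows below. The loop, reduced to a sketch (RPC_PY standing in for the full scripts/rpc.py path in the trace, PERF_PID for the spdk_nvme_perf process launched at @40):

  # Sketch of the hotplug loop traced above.
  RPC_PY=./scripts/rpc.py             # placeholder for scripts/rpc.py in an SPDK checkout
  NQN=nqn.2016-06.io.spdk:cnode1
  null_size=1000
  # PERF_PID: PID of the spdk_nvme_perf process started earlier (placeholder here).
  while kill -0 "$PERF_PID" 2>/dev/null; do        # run until perf exits
      "$RPC_PY" nvmf_subsystem_remove_ns "$NQN" 1  # hot-remove namespace 1 under I/O
      "$RPC_PY" nvmf_subsystem_add_ns "$NQN" Delay0
      null_size=$((null_size + 1))
      "$RPC_PY" bdev_null_resize NULL1 "$null_size"
  done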
00:10:23.058 ========================================================
00:10:23.058                                                                           Latency(us)
00:10:23.058 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:23.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1584.40       0.77   46257.79    1653.20 1104389.34
00:10:23.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15094.84       7.37    8458.84    2020.13  290792.60
00:10:23.058 ========================================================
00:10:23.058 Total                                                                  :   16679.24       8.14   12049.44    1653.20 1104389.34
00:10:23.058
00:10:23.058 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:23.058 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:23.323 true 00:10:23.323 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3640837 00:10:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3640837) - No such process 00:10:23.323 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3640837 00:10:23.323 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.583 15:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:23.842 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:23.842 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:23.842 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:23.842 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:23.842 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:23.842 null0 00:10:23.842 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:23.842 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:23.842 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:24.101 null1 00:10:24.101 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.101 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.101 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:24.101 null2 00:10:24.361 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.361 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.361 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
null3 00:10:24.361 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.361 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.361 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:24.621 null4 00:10:24.621 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.621 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.621 15:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:24.621 null5 00:10:24.621 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.621 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.621 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:24.880 null6 00:10:24.880 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.880 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.880 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:25.139 null7 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.139 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
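After perf exits (the @44 kill probe above fails with 'No such process' once PID 3640837 is gone), the test moves to its concurrent phase: @58-@64 create eight 100 MB null bdevs with 4096-byte blocks (null0 through null7) and launch eight background add_remove workers, each pairing one bdev with one namespace ID and attaching and detaching it ten times in a tight loop; the interleaved @14-@18 traces here and below are those eight workers racing, and @66 waits on all of their PIDs (3647020 3647022 3647026 and so on). Reconstructed as a sketch from the traced line numbers, with RPC_PY and NQN as placeholders as in the earlier sketch:

  # Sketch of the concurrent add/remove phase traced above.
  RPC_PY=./scripts/rpc.py             # placeholder for scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  add_remove() {                                     # one worker per bdev
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$RPC_PY" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
          "$RPC_PY" nvmf_subsystem_remove_ns "$NQN" "$nsid"
      done
  }
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      "$RPC_PY" bdev_null_create "null$i" 100 4096   # 100 MB, 4096-byte blocks
      add_remove $((i + 1)) "null$i" &               # nsid i+1 paired with null$i
      pids+=($!)
  done
  wait "${pids[@]}"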
00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3647020 3647022 3647026 3647030 3647032 3647034 3647036 3647039 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.140 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.399 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.400 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.400 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.400 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.400 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:25.400 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.400 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.400 15:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:25.659 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.659 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:25.659 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:10:25.659 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:25.659 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.659 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:25.659 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:25.659 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.918 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.185 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:26.444 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:26.444 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:26.444 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.444 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.444 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:26.444 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:26.444 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.444 15:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:26.704 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.704 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.704 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:26.704 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.704 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.704 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:26.704 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.705 15:48:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:26.705 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.964 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.223 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.224 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.483 
15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:27.483 15:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.742 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.742 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.742 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.742 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.743 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.002 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.003 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:28.262 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:28.262 
15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.262 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.262 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.262 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.262 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.262 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.262 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.521 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.521 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.521 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.522 15:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:28.522 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.522 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.522 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:28.522 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.522 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.522 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:28.781 rmmod nvme_tcp
00:10:28.781 rmmod nvme_fabrics
00:10:28.781 rmmod nvme_keyring
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3640274 ']'
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3640274
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3640274 ']'
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3640274
00:10:28.781 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname
00:10:29.040 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:10:29.040 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3640274
00:10:29.040 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:10:29.040 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:10:29.040 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3640274'
killing process with pid 3640274
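The nvmf/common.sh@120-@125 trace above is nvmfcleanup unloading the kernel modules; the rmmod lines are modprobe's verbose output. A sketch of the retry pattern the trace suggests (a reconstruction, not the verbatim common.sh; the break on success is an assumption, and the real script may pace the attempts differently):

    set +e                                 # an unload can fail while references remain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # assumption: stop after a clean unload
    done
    modprobe -v -r nvme-fabrics
    set -e

Here the first attempt succeeds, so the loop runs once and errexit is restored before nvmfcleanup returns 0.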
15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3640274
00:10:29.040 [2024-05-15 15:48:27.397634] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:10:29.040 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3640274
00:10:29.299 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:29.299 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:29.299 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:29.299 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:29.299 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:29.299 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:29.299 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:29.299 15:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:31.202 15:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:31.202
00:10:31.202 real 0m48.240s
00:10:31.202 user 3m6.550s
00:10:31.202 sys 0m20.439s
00:10:31.202 15:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:31.202 15:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:31.202 ************************************
00:10:31.202 END TEST nvmf_ns_hotplug_stress
00:10:31.202 ************************************
00:10:31.202 15:48:29 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:31.202 15:48:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:10:31.202 15:48:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:31.202 15:48:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:31.462 ************************************
00:10:31.462 START TEST nvmf_connect_stress
************************************
00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:31.462 * Looking for test storage...
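killprocess, traced at common/autotest_common.sh@946-@970 above, shuts the target down and reaps it; the app.c deprecation warning is the dying target flushing its log. A sketch of the traced flow (a reconstruction, not the verbatim helper; the sudo branch is inferred from the @956 comparison):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # @946: nothing to kill
        kill -0 "$pid"                           # @950: fails if already gone
        if [ "$(uname)" = Linux ]; then          # @951
            process_name=$(ps --no-headers -o comm= "$pid")   # @952
        fi
        if [ "$process_name" = sudo ]; then      # @956
            sudo kill "$pid"                     # assumption: sudo-owned target
        else
            echo "killing process with pid $pid" # @964
            kill "$pid"                          # @965
        fi
        wait "$pid"                              # @970: reap and propagate status
    }

With the hotplug target gone, the run_test wrapper at nvmf/nvmf.sh@33 launches the next script, which is why the START TEST nvmf_connect_stress banner follows immediately.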
00:10:31.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.462 15:48:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.463 15:48:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:39.608 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:39.608 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:39.608 Found net devices under 0000:af:00.0: cvl_0_0 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.608 15:48:36 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:39.608 Found net devices under 0000:af:00.1: cvl_0_1 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.608 15:48:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:39.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:39.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:10:39.608 00:10:39.608 --- 10.0.0.2 ping statistics --- 00:10:39.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.608 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:10:39.609 00:10:39.609 --- 10.0.0.1 ping statistics --- 00:10:39.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.609 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3651529 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3651529 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3651529 ']' 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 [2024-05-15 15:48:37.114210] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
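The netns plumbing and target launch traced above reduce to a handful of commands. A minimal sketch, assuming the two ice ports come up as cvl_0_0 and cvl_0_1 as they do in this run (the nvmf_tgt path and flags are copied from the log):

    ip netns add cvl_0_0_ns_spdk                       # target side lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port in...
    ip addr add 10.0.0.1/24 dev cvl_0_1                # ...the initiator keeps the other
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
    ping -c 1 10.0.0.2                                 # sanity-check the path in both directions
    modprobe nvme-tcp                                  # kernel initiator for later connects
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # SPDK target on cores 1-3, inside the namespace

The harness then waits for the target's UNIX RPC socket (/var/tmp/spdk.sock) before issuing any configuration, which is the "Waiting for process to start up and listen..." message below.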
00:10:39.609 [2024-05-15 15:48:37.114257] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.609 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.609 [2024-05-15 15:48:37.188455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.609 [2024-05-15 15:48:37.260088] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.609 [2024-05-15 15:48:37.260128] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.609 [2024-05-15 15:48:37.260137] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.609 [2024-05-15 15:48:37.260146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.609 [2024-05-15 15:48:37.260154] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.609 [2024-05-15 15:48:37.260253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.609 [2024-05-15 15:48:37.260337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.609 [2024-05-15 15:48:37.260339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 [2024-05-15 15:48:37.960749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 [2024-05-15 15:48:37.981275] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:39.609 [2024-05-15 15:48:37.993346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.609 15:48:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 NULL1 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3651756 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.609 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.610 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.870 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.870 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:39.870 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.870 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.870 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.439 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.439 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:40.439 15:48:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.439 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.439 15:48:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.699 15:48:39 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.699 15:48:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:40.699 15:48:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.699 15:48:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.699 15:48:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.959 15:48:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.959 15:48:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:40.959 15:48:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.959 15:48:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.959 15:48:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.225 15:48:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.225 15:48:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:41.225 15:48:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.225 15:48:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.225 15:48:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.546 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.546 15:48:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:41.546 15:48:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.546 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.546 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.806 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.806 15:48:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:41.806 15:48:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.806 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.806 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.375 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.375 15:48:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:42.375 15:48:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.375 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.375 15:48:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.635 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.635 15:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:42.635 15:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.635 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.635 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.894 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:10:42.894 15:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:42.894 15:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.894 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.894 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.154 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.154 15:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:43.154 15:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.154 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.154 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.414 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.414 15:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:43.414 15:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.414 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.414 15:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.983 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.983 15:48:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:43.983 15:48:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.983 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.983 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.242 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.242 15:48:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:44.242 15:48:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.242 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.242 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.502 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.502 15:48:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:44.502 15:48:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.502 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.502 15:48:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.761 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.761 15:48:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:44.761 15:48:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.761 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.761 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.019 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.278 15:48:43 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:45.278 15:48:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.278 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.278 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.536 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.536 15:48:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:45.536 15:48:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.536 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.536 15:48:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.794 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.794 15:48:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:45.794 15:48:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.794 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.794 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.053 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.053 15:48:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:46.053 15:48:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.053 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.053 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.622 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.622 15:48:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:46.622 15:48:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.622 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.622 15:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.880 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.880 15:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:46.880 15:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.880 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.880 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.139 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.139 15:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:47.139 15:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.139 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.139 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.398 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.398 15:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3651756 00:10:47.398 15:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.398 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.398 15:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.658 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.658 15:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:47.658 15:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.658 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.658 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.226 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.226 15:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:48.226 15:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.226 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.226 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.486 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.486 15:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:48.486 15:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.486 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.486 15:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.744 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.745 15:48:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:48.745 15:48:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.745 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.745 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.004 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.004 15:48:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:49.004 15:48:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.004 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.004 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.263 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.263 15:48:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:49.263 15:48:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.263 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.263 15:48:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.832 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.832 15:48:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:49.832 15:48:48 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.832 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.832 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.832 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:50.091 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3651756 00:10:50.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3651756) - No such process 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3651756 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.092 rmmod nvme_tcp 00:10:50.092 rmmod nvme_fabrics 00:10:50.092 rmmod nvme_keyring 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3651529 ']' 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3651529 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3651529 ']' 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3651529 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3651529 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3651529' 00:10:50.092 killing process with pid 3651529 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3651529 00:10:50.092 [2024-05-15 15:48:48.552804] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:10:50.092 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3651529 00:10:50.351 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.351 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:50.352 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:50.352 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.352 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:50.352 15:48:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.352 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.352 15:48:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.356 15:48:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:52.356 00:10:52.356 real 0m21.050s 00:10:52.356 user 0m41.103s 00:10:52.356 sys 0m10.417s 00:10:52.356 15:48:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:52.356 15:48:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.356 ************************************ 00:10:52.356 END TEST nvmf_connect_stress 00:10:52.356 ************************************ 00:10:52.356 15:48:50 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:52.356 15:48:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:52.356 15:48:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:52.356 15:48:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:52.620 ************************************ 00:10:52.620 START TEST nvmf_fused_ordering 00:10:52.620 ************************************ 00:10:52.620 15:48:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:52.620 * Looking for test storage... 
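With connect_stress done, the shape of that test is easier to see in isolation: the harness backgrounds the stress binary for a fixed window and keeps firing RPCs at the target for as long as the process stays alive. A sketch of that loop, assuming rpc_cmd is the harness wrapper around scripts/rpc.py and rpc.txt is the file of 20 randomly chosen RPCs built in the seq 1 20 loop above (arguments copied from the log):

    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do   # stress binary still churning connects?
        rpc_cmd < rpc.txt                       # replay the random RPCs in the meantime
    done
    wait "$PERF_PID"                            # reap it once the -t 10 window expires

This is why the log shows the same kill -0 3651756 / rpc_cmd pair repeating for ten seconds, and why the final kill -0 reports "No such process" right before the wait and the teardown (rmmod nvme-tcp, killprocess, address flush) that close the test out.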
00:10:52.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:52.620 15:48:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:59.191 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:59.192 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:59.192 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:59.192 Found net devices under 0000:af:00.0: cvl_0_0 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.192 15:48:57 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:59.192 Found net devices under 0000:af:00.1: cvl_0_1 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:59.192 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:59.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:10:59.451 00:10:59.451 --- 10.0.0.2 ping statistics --- 00:10:59.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.451 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:10:59.451 00:10:59.451 --- 10.0.0.1 ping statistics --- 00:10:59.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.451 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3657326 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3657326 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3657326 ']' 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:59.451 15:48:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:59.451 [2024-05-15 15:48:57.879116] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
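The namespace plumbing traced above (nvmf_tcp_init) amounts to splitting the two E810 ports into a target/initiator pair: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and a ping in each direction confirms the path before the target comes up. Collected into a standalone sketch, using the same interface names and addresses the harness discovered here (they will differ on other machines):

    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator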
00:10:59.451 [2024-05-15 15:48:57.879159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.451 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.451 [2024-05-15 15:48:57.950576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.710 [2024-05-15 15:48:58.023350] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.710 [2024-05-15 15:48:58.023381] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.710 [2024-05-15 15:48:58.023390] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.710 [2024-05-15 15:48:58.023398] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.710 [2024-05-15 15:48:58.023405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.710 [2024-05-15 15:48:58.023425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:00.278 [2024-05-15 15:48:58.708037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:00.278 [2024-05-15 15:48:58.724026] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:00.278 [2024-05-15 15:48:58.724225] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:00.278 NULL1 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.278 15:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:00.278 [2024-05-15 15:48:58.779171] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
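With the data path verified and nvme-tcp loaded, nvmfappstart launches nvmf_tgt inside the target namespace and waits for /var/tmp/spdk.sock; each rpc_cmd line in the trace is a thin wrapper around scripts/rpc.py talking to that socket. A condensed sketch of the same target setup and test invocation, run from an SPDK checkout (it assumes the target is listening before the RPCs are issued, which waitforlisten guarantees above):

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sudo ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks
    sudo ./scripts/rpc.py bdev_wait_for_examine
    sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # The exerciser whose fused_ordering(0..1023) lines follow:
    sudo ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'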
00:11:00.278 [2024-05-15 15:48:58.779214] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657362 ] 00:11:00.278 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.215 Attached to nqn.2016-06.io.spdk:cnode1 00:11:01.215 Namespace ID: 1 size: 1GB 00:11:01.215 fused_ordering(0) 00:11:01.215 fused_ordering(1) 00:11:01.215 fused_ordering(2) 00:11:01.215 fused_ordering(3) 00:11:01.215 fused_ordering(4) 00:11:01.215 fused_ordering(5) 00:11:01.215 fused_ordering(6) 00:11:01.215 fused_ordering(7) 00:11:01.215 fused_ordering(8) 00:11:01.215 fused_ordering(9) 00:11:01.215 fused_ordering(10) 00:11:01.215 fused_ordering(11) 00:11:01.215 fused_ordering(12) 00:11:01.215 fused_ordering(13) 00:11:01.215 fused_ordering(14) 00:11:01.215 fused_ordering(15) 00:11:01.215 fused_ordering(16) 00:11:01.215 fused_ordering(17) 00:11:01.215 fused_ordering(18) 00:11:01.215 fused_ordering(19) 00:11:01.215 fused_ordering(20) 00:11:01.215 fused_ordering(21) 00:11:01.215 fused_ordering(22) 00:11:01.215 fused_ordering(23) 00:11:01.215 fused_ordering(24) 00:11:01.215 fused_ordering(25) 00:11:01.215 fused_ordering(26) 00:11:01.215 fused_ordering(27) 00:11:01.215 fused_ordering(28) 00:11:01.215 fused_ordering(29) 00:11:01.215 fused_ordering(30) 00:11:01.215 fused_ordering(31) 00:11:01.215 fused_ordering(32) 00:11:01.215 fused_ordering(33) 00:11:01.215 fused_ordering(34) 00:11:01.215 fused_ordering(35) 00:11:01.215 fused_ordering(36) 00:11:01.215 fused_ordering(37) 00:11:01.215 fused_ordering(38) 00:11:01.215 fused_ordering(39) 00:11:01.215 fused_ordering(40) 00:11:01.215 fused_ordering(41) 00:11:01.215 fused_ordering(42) 00:11:01.215 fused_ordering(43) 00:11:01.215 fused_ordering(44) 00:11:01.215 fused_ordering(45) 00:11:01.215 fused_ordering(46) 00:11:01.215 fused_ordering(47) 00:11:01.215 fused_ordering(48) 00:11:01.215 fused_ordering(49) 00:11:01.215 fused_ordering(50) 00:11:01.215 fused_ordering(51) 00:11:01.215 fused_ordering(52) 00:11:01.215 fused_ordering(53) 00:11:01.215 fused_ordering(54) 00:11:01.215 fused_ordering(55) 00:11:01.215 fused_ordering(56) 00:11:01.215 fused_ordering(57) 00:11:01.215 fused_ordering(58) 00:11:01.215 fused_ordering(59) 00:11:01.215 fused_ordering(60) 00:11:01.215 fused_ordering(61) 00:11:01.215 fused_ordering(62) 00:11:01.215 fused_ordering(63) 00:11:01.215 fused_ordering(64) 00:11:01.215 fused_ordering(65) 00:11:01.215 fused_ordering(66) 00:11:01.215 fused_ordering(67) 00:11:01.215 fused_ordering(68) 00:11:01.215 fused_ordering(69) 00:11:01.215 fused_ordering(70) 00:11:01.215 fused_ordering(71) 00:11:01.215 fused_ordering(72) 00:11:01.215 fused_ordering(73) 00:11:01.215 fused_ordering(74) 00:11:01.215 fused_ordering(75) 00:11:01.215 fused_ordering(76) 00:11:01.215 fused_ordering(77) 00:11:01.215 fused_ordering(78) 00:11:01.215 fused_ordering(79) 00:11:01.215 fused_ordering(80) 00:11:01.215 fused_ordering(81) 00:11:01.215 fused_ordering(82) 00:11:01.215 fused_ordering(83) 00:11:01.215 fused_ordering(84) 00:11:01.215 fused_ordering(85) 00:11:01.215 fused_ordering(86) 00:11:01.215 fused_ordering(87) 00:11:01.215 fused_ordering(88) 00:11:01.215 fused_ordering(89) 00:11:01.216 fused_ordering(90) 00:11:01.216 fused_ordering(91) 00:11:01.216 fused_ordering(92) 00:11:01.216 fused_ordering(93) 00:11:01.216 fused_ordering(94) 00:11:01.216 fused_ordering(95) 00:11:01.216 fused_ordering(96) 00:11:01.216 
fused_ordering(97) 00:11:01.216 fused_ordering(98) 00:11:01.216 fused_ordering(99) 00:11:01.216 fused_ordering(100) 00:11:01.216 fused_ordering(101) 00:11:01.216 fused_ordering(102) 00:11:01.216 fused_ordering(103) 00:11:01.216 fused_ordering(104) 00:11:01.216 fused_ordering(105) 00:11:01.216 fused_ordering(106) 00:11:01.216 fused_ordering(107) 00:11:01.216 fused_ordering(108) 00:11:01.216 fused_ordering(109) 00:11:01.216 fused_ordering(110) 00:11:01.216 fused_ordering(111) 00:11:01.216 fused_ordering(112) 00:11:01.216 fused_ordering(113) 00:11:01.216 fused_ordering(114) 00:11:01.216 fused_ordering(115) 00:11:01.216 fused_ordering(116) 00:11:01.216 fused_ordering(117) 00:11:01.216 fused_ordering(118) 00:11:01.216 fused_ordering(119) 00:11:01.216 fused_ordering(120) 00:11:01.216 fused_ordering(121) 00:11:01.216 fused_ordering(122) 00:11:01.216 fused_ordering(123) 00:11:01.216 fused_ordering(124) 00:11:01.216 fused_ordering(125) 00:11:01.216 fused_ordering(126) 00:11:01.216 fused_ordering(127) 00:11:01.216 fused_ordering(128) 00:11:01.216 fused_ordering(129) 00:11:01.216 fused_ordering(130) 00:11:01.216 fused_ordering(131) 00:11:01.216 fused_ordering(132) 00:11:01.216 fused_ordering(133) 00:11:01.216 fused_ordering(134) 00:11:01.216 fused_ordering(135) 00:11:01.216 fused_ordering(136) 00:11:01.216 fused_ordering(137) 00:11:01.216 fused_ordering(138) 00:11:01.216 fused_ordering(139) 00:11:01.216 fused_ordering(140) 00:11:01.216 fused_ordering(141) 00:11:01.216 fused_ordering(142) 00:11:01.216 fused_ordering(143) 00:11:01.216 fused_ordering(144) 00:11:01.216 fused_ordering(145) 00:11:01.216 fused_ordering(146) 00:11:01.216 fused_ordering(147) 00:11:01.216 fused_ordering(148) 00:11:01.216 fused_ordering(149) 00:11:01.216 fused_ordering(150) 00:11:01.216 fused_ordering(151) 00:11:01.216 fused_ordering(152) 00:11:01.216 fused_ordering(153) 00:11:01.216 fused_ordering(154) 00:11:01.216 fused_ordering(155) 00:11:01.216 fused_ordering(156) 00:11:01.216 fused_ordering(157) 00:11:01.216 fused_ordering(158) 00:11:01.216 fused_ordering(159) 00:11:01.216 fused_ordering(160) 00:11:01.216 fused_ordering(161) 00:11:01.216 fused_ordering(162) 00:11:01.216 fused_ordering(163) 00:11:01.216 fused_ordering(164) 00:11:01.216 fused_ordering(165) 00:11:01.216 fused_ordering(166) 00:11:01.216 fused_ordering(167) 00:11:01.216 fused_ordering(168) 00:11:01.216 fused_ordering(169) 00:11:01.216 fused_ordering(170) 00:11:01.216 fused_ordering(171) 00:11:01.216 fused_ordering(172) 00:11:01.216 fused_ordering(173) 00:11:01.216 fused_ordering(174) 00:11:01.216 fused_ordering(175) 00:11:01.216 fused_ordering(176) 00:11:01.216 fused_ordering(177) 00:11:01.216 fused_ordering(178) 00:11:01.216 fused_ordering(179) 00:11:01.216 fused_ordering(180) 00:11:01.216 fused_ordering(181) 00:11:01.216 fused_ordering(182) 00:11:01.216 fused_ordering(183) 00:11:01.216 fused_ordering(184) 00:11:01.216 fused_ordering(185) 00:11:01.216 fused_ordering(186) 00:11:01.216 fused_ordering(187) 00:11:01.216 fused_ordering(188) 00:11:01.216 fused_ordering(189) 00:11:01.216 fused_ordering(190) 00:11:01.216 fused_ordering(191) 00:11:01.216 fused_ordering(192) 00:11:01.216 fused_ordering(193) 00:11:01.216 fused_ordering(194) 00:11:01.216 fused_ordering(195) 00:11:01.216 fused_ordering(196) 00:11:01.216 fused_ordering(197) 00:11:01.216 fused_ordering(198) 00:11:01.216 fused_ordering(199) 00:11:01.216 fused_ordering(200) 00:11:01.216 fused_ordering(201) 00:11:01.216 fused_ordering(202) 00:11:01.216 fused_ordering(203) 00:11:01.216 fused_ordering(204) 
00:11:01.216 fused_ordering(205) 00:11:01.784 fused_ordering(206) 00:11:01.784 fused_ordering(207) 00:11:01.784 fused_ordering(208) 00:11:01.784 fused_ordering(209) 00:11:01.784 fused_ordering(210) 00:11:01.784 fused_ordering(211) 00:11:01.784 fused_ordering(212) 00:11:01.784 fused_ordering(213) 00:11:01.784 fused_ordering(214) 00:11:01.784 fused_ordering(215) 00:11:01.784 fused_ordering(216) 00:11:01.784 fused_ordering(217) 00:11:01.784 fused_ordering(218) 00:11:01.784 fused_ordering(219) 00:11:01.784 fused_ordering(220) 00:11:01.784 fused_ordering(221) 00:11:01.784 fused_ordering(222) 00:11:01.784 fused_ordering(223) 00:11:01.784 fused_ordering(224) 00:11:01.784 fused_ordering(225) 00:11:01.784 fused_ordering(226) 00:11:01.784 fused_ordering(227) 00:11:01.784 fused_ordering(228) 00:11:01.784 fused_ordering(229) 00:11:01.784 fused_ordering(230) 00:11:01.784 fused_ordering(231) 00:11:01.784 fused_ordering(232) 00:11:01.784 fused_ordering(233) 00:11:01.784 fused_ordering(234) 00:11:01.784 fused_ordering(235) 00:11:01.784 fused_ordering(236) 00:11:01.784 fused_ordering(237) 00:11:01.784 fused_ordering(238) 00:11:01.784 fused_ordering(239) 00:11:01.784 fused_ordering(240) 00:11:01.784 fused_ordering(241) 00:11:01.784 fused_ordering(242) 00:11:01.784 fused_ordering(243) 00:11:01.784 fused_ordering(244) 00:11:01.784 fused_ordering(245) 00:11:01.784 fused_ordering(246) 00:11:01.784 fused_ordering(247) 00:11:01.784 fused_ordering(248) 00:11:01.784 fused_ordering(249) 00:11:01.784 fused_ordering(250) 00:11:01.784 fused_ordering(251) 00:11:01.784 fused_ordering(252) 00:11:01.784 fused_ordering(253) 00:11:01.784 fused_ordering(254) 00:11:01.784 fused_ordering(255) 00:11:01.784 fused_ordering(256) 00:11:01.784 fused_ordering(257) 00:11:01.784 fused_ordering(258) 00:11:01.784 fused_ordering(259) 00:11:01.784 fused_ordering(260) 00:11:01.784 fused_ordering(261) 00:11:01.784 fused_ordering(262) 00:11:01.784 fused_ordering(263) 00:11:01.784 fused_ordering(264) 00:11:01.784 fused_ordering(265) 00:11:01.784 fused_ordering(266) 00:11:01.784 fused_ordering(267) 00:11:01.784 fused_ordering(268) 00:11:01.784 fused_ordering(269) 00:11:01.784 fused_ordering(270) 00:11:01.784 fused_ordering(271) 00:11:01.784 fused_ordering(272) 00:11:01.785 fused_ordering(273) 00:11:01.785 fused_ordering(274) 00:11:01.785 fused_ordering(275) 00:11:01.785 fused_ordering(276) 00:11:01.785 fused_ordering(277) 00:11:01.785 fused_ordering(278) 00:11:01.785 fused_ordering(279) 00:11:01.785 fused_ordering(280) 00:11:01.785 fused_ordering(281) 00:11:01.785 fused_ordering(282) 00:11:01.785 fused_ordering(283) 00:11:01.785 fused_ordering(284) 00:11:01.785 fused_ordering(285) 00:11:01.785 fused_ordering(286) 00:11:01.785 fused_ordering(287) 00:11:01.785 fused_ordering(288) 00:11:01.785 fused_ordering(289) 00:11:01.785 fused_ordering(290) 00:11:01.785 fused_ordering(291) 00:11:01.785 fused_ordering(292) 00:11:01.785 fused_ordering(293) 00:11:01.785 fused_ordering(294) 00:11:01.785 fused_ordering(295) 00:11:01.785 fused_ordering(296) 00:11:01.785 fused_ordering(297) 00:11:01.785 fused_ordering(298) 00:11:01.785 fused_ordering(299) 00:11:01.785 fused_ordering(300) 00:11:01.785 fused_ordering(301) 00:11:01.785 fused_ordering(302) 00:11:01.785 fused_ordering(303) 00:11:01.785 fused_ordering(304) 00:11:01.785 fused_ordering(305) 00:11:01.785 fused_ordering(306) 00:11:01.785 fused_ordering(307) 00:11:01.785 fused_ordering(308) 00:11:01.785 fused_ordering(309) 00:11:01.785 fused_ordering(310) 00:11:01.785 fused_ordering(311) 00:11:01.785 
fused_ordering(312) 00:11:01.785 fused_ordering(313) 00:11:01.785 fused_ordering(314) 00:11:01.785 fused_ordering(315) 00:11:01.785 fused_ordering(316) 00:11:01.785 fused_ordering(317) 00:11:01.785 fused_ordering(318) 00:11:01.785 fused_ordering(319) 00:11:01.785 fused_ordering(320) 00:11:01.785 fused_ordering(321) 00:11:01.785 fused_ordering(322) 00:11:01.785 fused_ordering(323) 00:11:01.785 fused_ordering(324) 00:11:01.785 fused_ordering(325) 00:11:01.785 fused_ordering(326) 00:11:01.785 fused_ordering(327) 00:11:01.785 fused_ordering(328) 00:11:01.785 fused_ordering(329) 00:11:01.785 fused_ordering(330) 00:11:01.785 fused_ordering(331) 00:11:01.785 fused_ordering(332) 00:11:01.785 fused_ordering(333) 00:11:01.785 fused_ordering(334) 00:11:01.785 fused_ordering(335) 00:11:01.785 fused_ordering(336) 00:11:01.785 fused_ordering(337) 00:11:01.785 fused_ordering(338) 00:11:01.785 fused_ordering(339) 00:11:01.785 fused_ordering(340) 00:11:01.785 fused_ordering(341) 00:11:01.785 fused_ordering(342) 00:11:01.785 fused_ordering(343) 00:11:01.785 fused_ordering(344) 00:11:01.785 fused_ordering(345) 00:11:01.785 fused_ordering(346) 00:11:01.785 fused_ordering(347) 00:11:01.785 fused_ordering(348) 00:11:01.785 fused_ordering(349) 00:11:01.785 fused_ordering(350) 00:11:01.785 fused_ordering(351) 00:11:01.785 fused_ordering(352) 00:11:01.785 fused_ordering(353) 00:11:01.785 fused_ordering(354) 00:11:01.785 fused_ordering(355) 00:11:01.785 fused_ordering(356) 00:11:01.785 fused_ordering(357) 00:11:01.785 fused_ordering(358) 00:11:01.785 fused_ordering(359) 00:11:01.785 fused_ordering(360) 00:11:01.785 fused_ordering(361) 00:11:01.785 fused_ordering(362) 00:11:01.785 fused_ordering(363) 00:11:01.785 fused_ordering(364) 00:11:01.785 fused_ordering(365) 00:11:01.785 fused_ordering(366) 00:11:01.785 fused_ordering(367) 00:11:01.785 fused_ordering(368) 00:11:01.785 fused_ordering(369) 00:11:01.785 fused_ordering(370) 00:11:01.785 fused_ordering(371) 00:11:01.785 fused_ordering(372) 00:11:01.785 fused_ordering(373) 00:11:01.785 fused_ordering(374) 00:11:01.785 fused_ordering(375) 00:11:01.785 fused_ordering(376) 00:11:01.785 fused_ordering(377) 00:11:01.785 fused_ordering(378) 00:11:01.785 fused_ordering(379) 00:11:01.785 fused_ordering(380) 00:11:01.785 fused_ordering(381) 00:11:01.785 fused_ordering(382) 00:11:01.785 fused_ordering(383) 00:11:01.785 fused_ordering(384) 00:11:01.785 fused_ordering(385) 00:11:01.785 fused_ordering(386) 00:11:01.785 fused_ordering(387) 00:11:01.785 fused_ordering(388) 00:11:01.785 fused_ordering(389) 00:11:01.785 fused_ordering(390) 00:11:01.785 fused_ordering(391) 00:11:01.785 fused_ordering(392) 00:11:01.785 fused_ordering(393) 00:11:01.785 fused_ordering(394) 00:11:01.785 fused_ordering(395) 00:11:01.785 fused_ordering(396) 00:11:01.785 fused_ordering(397) 00:11:01.785 fused_ordering(398) 00:11:01.785 fused_ordering(399) 00:11:01.785 fused_ordering(400) 00:11:01.785 fused_ordering(401) 00:11:01.785 fused_ordering(402) 00:11:01.785 fused_ordering(403) 00:11:01.785 fused_ordering(404) 00:11:01.785 fused_ordering(405) 00:11:01.785 fused_ordering(406) 00:11:01.785 fused_ordering(407) 00:11:01.785 fused_ordering(408) 00:11:01.785 fused_ordering(409) 00:11:01.785 fused_ordering(410) 00:11:02.722 fused_ordering(411) 00:11:02.722 fused_ordering(412) 00:11:02.722 fused_ordering(413) 00:11:02.722 fused_ordering(414) 00:11:02.722 fused_ordering(415) 00:11:02.722 fused_ordering(416) 00:11:02.722 fused_ordering(417) 00:11:02.722 fused_ordering(418) 00:11:02.722 fused_ordering(419) 
00:11:02.722 fused_ordering(420) 00:11:02.722 fused_ordering(421) 00:11:02.722 fused_ordering(422) 00:11:02.722 fused_ordering(423) 00:11:02.722 fused_ordering(424) 00:11:02.722 fused_ordering(425) 00:11:02.722 fused_ordering(426) 00:11:02.722 fused_ordering(427) 00:11:02.722 fused_ordering(428) 00:11:02.722 fused_ordering(429) 00:11:02.722 fused_ordering(430) 00:11:02.722 fused_ordering(431) 00:11:02.722 fused_ordering(432) 00:11:02.722 fused_ordering(433) 00:11:02.722 fused_ordering(434) 00:11:02.722 fused_ordering(435) 00:11:02.722 fused_ordering(436) 00:11:02.722 fused_ordering(437) 00:11:02.722 fused_ordering(438) 00:11:02.722 fused_ordering(439) 00:11:02.722 fused_ordering(440) 00:11:02.722 fused_ordering(441) 00:11:02.722 fused_ordering(442) 00:11:02.722 fused_ordering(443) 00:11:02.722 fused_ordering(444) 00:11:02.722 fused_ordering(445) 00:11:02.722 fused_ordering(446) 00:11:02.722 fused_ordering(447) 00:11:02.722 fused_ordering(448) 00:11:02.722 fused_ordering(449) 00:11:02.722 fused_ordering(450) 00:11:02.722 fused_ordering(451) 00:11:02.722 fused_ordering(452) 00:11:02.722 fused_ordering(453) 00:11:02.722 fused_ordering(454) 00:11:02.722 fused_ordering(455) 00:11:02.722 fused_ordering(456) 00:11:02.722 fused_ordering(457) 00:11:02.722 fused_ordering(458) 00:11:02.722 fused_ordering(459) 00:11:02.722 fused_ordering(460) 00:11:02.722 fused_ordering(461) 00:11:02.722 fused_ordering(462) 00:11:02.722 fused_ordering(463) 00:11:02.722 fused_ordering(464) 00:11:02.722 fused_ordering(465) 00:11:02.722 fused_ordering(466) 00:11:02.722 fused_ordering(467) 00:11:02.722 fused_ordering(468) 00:11:02.722 fused_ordering(469) 00:11:02.722 fused_ordering(470) 00:11:02.722 fused_ordering(471) 00:11:02.722 fused_ordering(472) 00:11:02.722 fused_ordering(473) 00:11:02.722 fused_ordering(474) 00:11:02.722 fused_ordering(475) 00:11:02.722 fused_ordering(476) 00:11:02.722 fused_ordering(477) 00:11:02.722 fused_ordering(478) 00:11:02.722 fused_ordering(479) 00:11:02.722 fused_ordering(480) 00:11:02.722 fused_ordering(481) 00:11:02.722 fused_ordering(482) 00:11:02.722 fused_ordering(483) 00:11:02.722 fused_ordering(484) 00:11:02.722 fused_ordering(485) 00:11:02.722 fused_ordering(486) 00:11:02.722 fused_ordering(487) 00:11:02.722 fused_ordering(488) 00:11:02.722 fused_ordering(489) 00:11:02.722 fused_ordering(490) 00:11:02.722 fused_ordering(491) 00:11:02.722 fused_ordering(492) 00:11:02.722 fused_ordering(493) 00:11:02.722 fused_ordering(494) 00:11:02.722 fused_ordering(495) 00:11:02.722 fused_ordering(496) 00:11:02.722 fused_ordering(497) 00:11:02.722 fused_ordering(498) 00:11:02.722 fused_ordering(499) 00:11:02.722 fused_ordering(500) 00:11:02.722 fused_ordering(501) 00:11:02.722 fused_ordering(502) 00:11:02.722 fused_ordering(503) 00:11:02.722 fused_ordering(504) 00:11:02.722 fused_ordering(505) 00:11:02.722 fused_ordering(506) 00:11:02.722 fused_ordering(507) 00:11:02.722 fused_ordering(508) 00:11:02.722 fused_ordering(509) 00:11:02.722 fused_ordering(510) 00:11:02.722 fused_ordering(511) 00:11:02.722 fused_ordering(512) 00:11:02.722 fused_ordering(513) 00:11:02.722 fused_ordering(514) 00:11:02.722 fused_ordering(515) 00:11:02.722 fused_ordering(516) 00:11:02.722 fused_ordering(517) 00:11:02.722 fused_ordering(518) 00:11:02.722 fused_ordering(519) 00:11:02.722 fused_ordering(520) 00:11:02.722 fused_ordering(521) 00:11:02.722 fused_ordering(522) 00:11:02.722 fused_ordering(523) 00:11:02.722 fused_ordering(524) 00:11:02.722 fused_ordering(525) 00:11:02.722 fused_ordering(526) 00:11:02.722 
fused_ordering(527) 00:11:02.722 fused_ordering(528) 00:11:02.722 fused_ordering(529) 00:11:02.722 fused_ordering(530) 00:11:02.722 fused_ordering(531) 00:11:02.722 fused_ordering(532) 00:11:02.722 fused_ordering(533) 00:11:02.722 fused_ordering(534) 00:11:02.722 fused_ordering(535) 00:11:02.722 fused_ordering(536) 00:11:02.722 fused_ordering(537) 00:11:02.722 fused_ordering(538) 00:11:02.722 fused_ordering(539) 00:11:02.722 fused_ordering(540) 00:11:02.722 fused_ordering(541) 00:11:02.722 fused_ordering(542) 00:11:02.722 fused_ordering(543) 00:11:02.722 fused_ordering(544) 00:11:02.722 fused_ordering(545) 00:11:02.722 fused_ordering(546) 00:11:02.722 fused_ordering(547) 00:11:02.722 fused_ordering(548) 00:11:02.722 fused_ordering(549) 00:11:02.722 fused_ordering(550) 00:11:02.722 fused_ordering(551) 00:11:02.722 fused_ordering(552) 00:11:02.722 fused_ordering(553) 00:11:02.722 fused_ordering(554) 00:11:02.722 fused_ordering(555) 00:11:02.722 fused_ordering(556) 00:11:02.722 fused_ordering(557) 00:11:02.722 fused_ordering(558) 00:11:02.722 fused_ordering(559) 00:11:02.722 fused_ordering(560) 00:11:02.722 fused_ordering(561) 00:11:02.722 fused_ordering(562) 00:11:02.722 fused_ordering(563) 00:11:02.722 fused_ordering(564) 00:11:02.722 fused_ordering(565) 00:11:02.722 fused_ordering(566) 00:11:02.722 fused_ordering(567) 00:11:02.722 fused_ordering(568) 00:11:02.722 fused_ordering(569) 00:11:02.722 fused_ordering(570) 00:11:02.722 fused_ordering(571) 00:11:02.722 fused_ordering(572) 00:11:02.722 fused_ordering(573) 00:11:02.722 fused_ordering(574) 00:11:02.722 fused_ordering(575) 00:11:02.722 fused_ordering(576) 00:11:02.722 fused_ordering(577) 00:11:02.722 fused_ordering(578) 00:11:02.722 fused_ordering(579) 00:11:02.722 fused_ordering(580) 00:11:02.722 fused_ordering(581) 00:11:02.722 fused_ordering(582) 00:11:02.723 fused_ordering(583) 00:11:02.723 fused_ordering(584) 00:11:02.723 fused_ordering(585) 00:11:02.723 fused_ordering(586) 00:11:02.723 fused_ordering(587) 00:11:02.723 fused_ordering(588) 00:11:02.723 fused_ordering(589) 00:11:02.723 fused_ordering(590) 00:11:02.723 fused_ordering(591) 00:11:02.723 fused_ordering(592) 00:11:02.723 fused_ordering(593) 00:11:02.723 fused_ordering(594) 00:11:02.723 fused_ordering(595) 00:11:02.723 fused_ordering(596) 00:11:02.723 fused_ordering(597) 00:11:02.723 fused_ordering(598) 00:11:02.723 fused_ordering(599) 00:11:02.723 fused_ordering(600) 00:11:02.723 fused_ordering(601) 00:11:02.723 fused_ordering(602) 00:11:02.723 fused_ordering(603) 00:11:02.723 fused_ordering(604) 00:11:02.723 fused_ordering(605) 00:11:02.723 fused_ordering(606) 00:11:02.723 fused_ordering(607) 00:11:02.723 fused_ordering(608) 00:11:02.723 fused_ordering(609) 00:11:02.723 fused_ordering(610) 00:11:02.723 fused_ordering(611) 00:11:02.723 fused_ordering(612) 00:11:02.723 fused_ordering(613) 00:11:02.723 fused_ordering(614) 00:11:02.723 fused_ordering(615) 00:11:03.291 fused_ordering(616) 00:11:03.291 fused_ordering(617) 00:11:03.291 fused_ordering(618) 00:11:03.291 fused_ordering(619) 00:11:03.291 fused_ordering(620) 00:11:03.291 fused_ordering(621) 00:11:03.291 fused_ordering(622) 00:11:03.291 fused_ordering(623) 00:11:03.291 fused_ordering(624) 00:11:03.291 fused_ordering(625) 00:11:03.291 fused_ordering(626) 00:11:03.291 fused_ordering(627) 00:11:03.291 fused_ordering(628) 00:11:03.291 fused_ordering(629) 00:11:03.291 fused_ordering(630) 00:11:03.291 fused_ordering(631) 00:11:03.291 fused_ordering(632) 00:11:03.291 fused_ordering(633) 00:11:03.291 fused_ordering(634) 
00:11:03.291 fused_ordering(635) 00:11:03.291 fused_ordering(636) 00:11:03.291 fused_ordering(637) 00:11:03.291 fused_ordering(638) 00:11:03.291 fused_ordering(639) 00:11:03.291 fused_ordering(640) 00:11:03.291 fused_ordering(641) 00:11:03.291 fused_ordering(642) 00:11:03.291 fused_ordering(643) 00:11:03.291 fused_ordering(644) 00:11:03.291 fused_ordering(645) 00:11:03.291 fused_ordering(646) 00:11:03.291 fused_ordering(647) 00:11:03.291 fused_ordering(648) 00:11:03.291 fused_ordering(649) 00:11:03.291 fused_ordering(650) 00:11:03.291 fused_ordering(651) 00:11:03.291 fused_ordering(652) 00:11:03.291 fused_ordering(653) 00:11:03.291 fused_ordering(654) 00:11:03.291 fused_ordering(655) 00:11:03.291 fused_ordering(656) 00:11:03.291 fused_ordering(657) 00:11:03.291 fused_ordering(658) 00:11:03.291 fused_ordering(659) 00:11:03.291 fused_ordering(660) 00:11:03.291 fused_ordering(661) 00:11:03.291 fused_ordering(662) 00:11:03.291 fused_ordering(663) 00:11:03.291 fused_ordering(664) 00:11:03.291 fused_ordering(665) 00:11:03.291 fused_ordering(666) 00:11:03.291 fused_ordering(667) 00:11:03.291 fused_ordering(668) 00:11:03.291 fused_ordering(669) 00:11:03.291 fused_ordering(670) 00:11:03.291 fused_ordering(671) 00:11:03.291 fused_ordering(672) 00:11:03.291 fused_ordering(673) 00:11:03.291 fused_ordering(674) 00:11:03.291 fused_ordering(675) 00:11:03.291 fused_ordering(676) 00:11:03.291 fused_ordering(677) 00:11:03.291 fused_ordering(678) 00:11:03.291 fused_ordering(679) 00:11:03.291 fused_ordering(680) 00:11:03.291 fused_ordering(681) 00:11:03.291 fused_ordering(682) 00:11:03.291 fused_ordering(683) 00:11:03.291 fused_ordering(684) 00:11:03.291 fused_ordering(685) 00:11:03.291 fused_ordering(686) 00:11:03.291 fused_ordering(687) 00:11:03.291 fused_ordering(688) 00:11:03.291 fused_ordering(689) 00:11:03.291 fused_ordering(690) 00:11:03.291 fused_ordering(691) 00:11:03.291 fused_ordering(692) 00:11:03.291 fused_ordering(693) 00:11:03.291 fused_ordering(694) 00:11:03.291 fused_ordering(695) 00:11:03.291 fused_ordering(696) 00:11:03.291 fused_ordering(697) 00:11:03.291 fused_ordering(698) 00:11:03.291 fused_ordering(699) 00:11:03.291 fused_ordering(700) 00:11:03.291 fused_ordering(701) 00:11:03.291 fused_ordering(702) 00:11:03.291 fused_ordering(703) 00:11:03.291 fused_ordering(704) 00:11:03.291 fused_ordering(705) 00:11:03.291 fused_ordering(706) 00:11:03.291 fused_ordering(707) 00:11:03.291 fused_ordering(708) 00:11:03.291 fused_ordering(709) 00:11:03.291 fused_ordering(710) 00:11:03.291 fused_ordering(711) 00:11:03.291 fused_ordering(712) 00:11:03.291 fused_ordering(713) 00:11:03.291 fused_ordering(714) 00:11:03.291 fused_ordering(715) 00:11:03.291 fused_ordering(716) 00:11:03.291 fused_ordering(717) 00:11:03.291 fused_ordering(718) 00:11:03.291 fused_ordering(719) 00:11:03.291 fused_ordering(720) 00:11:03.291 fused_ordering(721) 00:11:03.291 fused_ordering(722) 00:11:03.291 fused_ordering(723) 00:11:03.291 fused_ordering(724) 00:11:03.291 fused_ordering(725) 00:11:03.291 fused_ordering(726) 00:11:03.291 fused_ordering(727) 00:11:03.291 fused_ordering(728) 00:11:03.291 fused_ordering(729) 00:11:03.291 fused_ordering(730) 00:11:03.291 fused_ordering(731) 00:11:03.291 fused_ordering(732) 00:11:03.291 fused_ordering(733) 00:11:03.291 fused_ordering(734) 00:11:03.291 fused_ordering(735) 00:11:03.291 fused_ordering(736) 00:11:03.291 fused_ordering(737) 00:11:03.291 fused_ordering(738) 00:11:03.291 fused_ordering(739) 00:11:03.291 fused_ordering(740) 00:11:03.291 fused_ordering(741) 00:11:03.291 
fused_ordering(742) 00:11:03.291 fused_ordering(743) 00:11:03.291 fused_ordering(744) 00:11:03.291 fused_ordering(745) 00:11:03.291 fused_ordering(746) 00:11:03.291 fused_ordering(747) 00:11:03.291 fused_ordering(748) 00:11:03.291 fused_ordering(749) 00:11:03.291 fused_ordering(750) 00:11:03.291 fused_ordering(751) 00:11:03.291 fused_ordering(752) 00:11:03.291 fused_ordering(753) 00:11:03.291 fused_ordering(754) 00:11:03.291 fused_ordering(755) 00:11:03.291 fused_ordering(756) 00:11:03.291 fused_ordering(757) 00:11:03.291 fused_ordering(758) 00:11:03.291 fused_ordering(759) 00:11:03.291 fused_ordering(760) 00:11:03.291 fused_ordering(761) 00:11:03.291 fused_ordering(762) 00:11:03.291 fused_ordering(763) 00:11:03.291 fused_ordering(764) 00:11:03.291 fused_ordering(765) 00:11:03.291 fused_ordering(766) 00:11:03.291 fused_ordering(767) 00:11:03.291 fused_ordering(768) 00:11:03.291 fused_ordering(769) 00:11:03.291 fused_ordering(770) 00:11:03.291 fused_ordering(771) 00:11:03.291 fused_ordering(772) 00:11:03.291 fused_ordering(773) 00:11:03.291 fused_ordering(774) 00:11:03.291 fused_ordering(775) 00:11:03.291 fused_ordering(776) 00:11:03.291 fused_ordering(777) 00:11:03.291 fused_ordering(778) 00:11:03.291 fused_ordering(779) 00:11:03.291 fused_ordering(780) 00:11:03.291 fused_ordering(781) 00:11:03.291 fused_ordering(782) 00:11:03.291 fused_ordering(783) 00:11:03.291 fused_ordering(784) 00:11:03.291 fused_ordering(785) 00:11:03.291 fused_ordering(786) 00:11:03.291 fused_ordering(787) 00:11:03.291 fused_ordering(788) 00:11:03.291 fused_ordering(789) 00:11:03.291 fused_ordering(790) 00:11:03.291 fused_ordering(791) 00:11:03.291 fused_ordering(792) 00:11:03.291 fused_ordering(793) 00:11:03.291 fused_ordering(794) 00:11:03.291 fused_ordering(795) 00:11:03.291 fused_ordering(796) 00:11:03.291 fused_ordering(797) 00:11:03.291 fused_ordering(798) 00:11:03.291 fused_ordering(799) 00:11:03.291 fused_ordering(800) 00:11:03.291 fused_ordering(801) 00:11:03.291 fused_ordering(802) 00:11:03.291 fused_ordering(803) 00:11:03.291 fused_ordering(804) 00:11:03.291 fused_ordering(805) 00:11:03.291 fused_ordering(806) 00:11:03.291 fused_ordering(807) 00:11:03.291 fused_ordering(808) 00:11:03.291 fused_ordering(809) 00:11:03.291 fused_ordering(810) 00:11:03.291 fused_ordering(811) 00:11:03.291 fused_ordering(812) 00:11:03.291 fused_ordering(813) 00:11:03.291 fused_ordering(814) 00:11:03.291 fused_ordering(815) 00:11:03.291 fused_ordering(816) 00:11:03.291 fused_ordering(817) 00:11:03.291 fused_ordering(818) 00:11:03.291 fused_ordering(819) 00:11:03.291 fused_ordering(820) 00:11:04.229 fused_ordering(821) 00:11:04.229 fused_ordering(822) 00:11:04.229 fused_ordering(823) 00:11:04.229 fused_ordering(824) 00:11:04.229 fused_ordering(825) 00:11:04.229 fused_ordering(826) 00:11:04.229 fused_ordering(827) 00:11:04.229 fused_ordering(828) 00:11:04.230 fused_ordering(829) 00:11:04.230 fused_ordering(830) 00:11:04.230 fused_ordering(831) 00:11:04.230 fused_ordering(832) 00:11:04.230 fused_ordering(833) 00:11:04.230 fused_ordering(834) 00:11:04.230 fused_ordering(835) 00:11:04.230 fused_ordering(836) 00:11:04.230 fused_ordering(837) 00:11:04.230 fused_ordering(838) 00:11:04.230 fused_ordering(839) 00:11:04.230 fused_ordering(840) 00:11:04.230 fused_ordering(841) 00:11:04.230 fused_ordering(842) 00:11:04.230 fused_ordering(843) 00:11:04.230 fused_ordering(844) 00:11:04.230 fused_ordering(845) 00:11:04.230 fused_ordering(846) 00:11:04.230 fused_ordering(847) 00:11:04.230 fused_ordering(848) 00:11:04.230 fused_ordering(849) 
00:11:04.230 fused_ordering(850) 00:11:04.230 fused_ordering(851) 00:11:04.230 fused_ordering(852) 00:11:04.230 fused_ordering(853) 00:11:04.230 fused_ordering(854) 00:11:04.230 fused_ordering(855) 00:11:04.230 fused_ordering(856) 00:11:04.230 fused_ordering(857) 00:11:04.230 fused_ordering(858) 00:11:04.230 fused_ordering(859) 00:11:04.230 fused_ordering(860) 00:11:04.230 fused_ordering(861) 00:11:04.230 fused_ordering(862) 00:11:04.230 fused_ordering(863) 00:11:04.230 fused_ordering(864) 00:11:04.230 fused_ordering(865) 00:11:04.230 fused_ordering(866) 00:11:04.230 fused_ordering(867) 00:11:04.230 fused_ordering(868) 00:11:04.230 fused_ordering(869) 00:11:04.230 fused_ordering(870) 00:11:04.230 fused_ordering(871) 00:11:04.230 fused_ordering(872) 00:11:04.230 fused_ordering(873) 00:11:04.230 fused_ordering(874) 00:11:04.230 fused_ordering(875) 00:11:04.230 fused_ordering(876) 00:11:04.230 fused_ordering(877) 00:11:04.230 fused_ordering(878) 00:11:04.230 fused_ordering(879) 00:11:04.230 fused_ordering(880) 00:11:04.230 fused_ordering(881) 00:11:04.230 fused_ordering(882) 00:11:04.230 fused_ordering(883) 00:11:04.230 fused_ordering(884) 00:11:04.230 fused_ordering(885) 00:11:04.230 fused_ordering(886) 00:11:04.230 fused_ordering(887) 00:11:04.230 fused_ordering(888) 00:11:04.230 fused_ordering(889) 00:11:04.230 fused_ordering(890) 00:11:04.230 fused_ordering(891) 00:11:04.230 fused_ordering(892) 00:11:04.230 fused_ordering(893) 00:11:04.230 fused_ordering(894) 00:11:04.230 fused_ordering(895) 00:11:04.230 fused_ordering(896) 00:11:04.230 fused_ordering(897) 00:11:04.230 fused_ordering(898) 00:11:04.230 fused_ordering(899) 00:11:04.230 fused_ordering(900) 00:11:04.230 fused_ordering(901) 00:11:04.230 fused_ordering(902) 00:11:04.230 fused_ordering(903) 00:11:04.230 fused_ordering(904) 00:11:04.230 fused_ordering(905) 00:11:04.230 fused_ordering(906) 00:11:04.230 fused_ordering(907) 00:11:04.230 fused_ordering(908) 00:11:04.230 fused_ordering(909) 00:11:04.230 fused_ordering(910) 00:11:04.230 fused_ordering(911) 00:11:04.230 fused_ordering(912) 00:11:04.230 fused_ordering(913) 00:11:04.230 fused_ordering(914) 00:11:04.230 fused_ordering(915) 00:11:04.230 fused_ordering(916) 00:11:04.230 fused_ordering(917) 00:11:04.230 fused_ordering(918) 00:11:04.230 fused_ordering(919) 00:11:04.230 fused_ordering(920) 00:11:04.230 fused_ordering(921) 00:11:04.230 fused_ordering(922) 00:11:04.230 fused_ordering(923) 00:11:04.230 fused_ordering(924) 00:11:04.230 fused_ordering(925) 00:11:04.230 fused_ordering(926) 00:11:04.230 fused_ordering(927) 00:11:04.230 fused_ordering(928) 00:11:04.230 fused_ordering(929) 00:11:04.230 fused_ordering(930) 00:11:04.230 fused_ordering(931) 00:11:04.230 fused_ordering(932) 00:11:04.230 fused_ordering(933) 00:11:04.230 fused_ordering(934) 00:11:04.230 fused_ordering(935) 00:11:04.230 fused_ordering(936) 00:11:04.230 fused_ordering(937) 00:11:04.230 fused_ordering(938) 00:11:04.230 fused_ordering(939) 00:11:04.230 fused_ordering(940) 00:11:04.230 fused_ordering(941) 00:11:04.230 fused_ordering(942) 00:11:04.230 fused_ordering(943) 00:11:04.230 fused_ordering(944) 00:11:04.230 fused_ordering(945) 00:11:04.230 fused_ordering(946) 00:11:04.230 fused_ordering(947) 00:11:04.230 fused_ordering(948) 00:11:04.230 fused_ordering(949) 00:11:04.230 fused_ordering(950) 00:11:04.230 fused_ordering(951) 00:11:04.230 fused_ordering(952) 00:11:04.230 fused_ordering(953) 00:11:04.230 fused_ordering(954) 00:11:04.230 fused_ordering(955) 00:11:04.230 fused_ordering(956) 00:11:04.230 
fused_ordering(957) 00:11:04.230 fused_ordering(958) 00:11:04.230 fused_ordering(959) 00:11:04.230 fused_ordering(960) 00:11:04.230 fused_ordering(961) 00:11:04.230 fused_ordering(962) 00:11:04.230 fused_ordering(963) 00:11:04.230 fused_ordering(964) 00:11:04.230 fused_ordering(965) 00:11:04.230 fused_ordering(966) 00:11:04.230 fused_ordering(967) 00:11:04.230 fused_ordering(968) 00:11:04.230 fused_ordering(969) 00:11:04.230 fused_ordering(970) 00:11:04.230 fused_ordering(971) 00:11:04.230 fused_ordering(972) 00:11:04.230 fused_ordering(973) 00:11:04.230 fused_ordering(974) 00:11:04.230 fused_ordering(975) 00:11:04.230 fused_ordering(976) 00:11:04.230 fused_ordering(977) 00:11:04.230 fused_ordering(978) 00:11:04.230 fused_ordering(979) 00:11:04.230 fused_ordering(980) 00:11:04.230 fused_ordering(981) 00:11:04.230 fused_ordering(982) 00:11:04.230 fused_ordering(983) 00:11:04.230 fused_ordering(984) 00:11:04.230 fused_ordering(985) 00:11:04.230 fused_ordering(986) 00:11:04.230 fused_ordering(987) 00:11:04.230 fused_ordering(988) 00:11:04.230 fused_ordering(989) 00:11:04.230 fused_ordering(990) 00:11:04.230 fused_ordering(991) 00:11:04.230 fused_ordering(992) 00:11:04.230 fused_ordering(993) 00:11:04.230 fused_ordering(994) 00:11:04.230 fused_ordering(995) 00:11:04.230 fused_ordering(996) 00:11:04.230 fused_ordering(997) 00:11:04.230 fused_ordering(998) 00:11:04.230 fused_ordering(999) 00:11:04.230 fused_ordering(1000) 00:11:04.230 fused_ordering(1001) 00:11:04.230 fused_ordering(1002) 00:11:04.230 fused_ordering(1003) 00:11:04.230 fused_ordering(1004) 00:11:04.230 fused_ordering(1005) 00:11:04.230 fused_ordering(1006) 00:11:04.230 fused_ordering(1007) 00:11:04.230 fused_ordering(1008) 00:11:04.230 fused_ordering(1009) 00:11:04.230 fused_ordering(1010) 00:11:04.230 fused_ordering(1011) 00:11:04.230 fused_ordering(1012) 00:11:04.230 fused_ordering(1013) 00:11:04.230 fused_ordering(1014) 00:11:04.230 fused_ordering(1015) 00:11:04.230 fused_ordering(1016) 00:11:04.230 fused_ordering(1017) 00:11:04.230 fused_ordering(1018) 00:11:04.230 fused_ordering(1019) 00:11:04.230 fused_ordering(1020) 00:11:04.230 fused_ordering(1021) 00:11:04.230 fused_ordering(1022) 00:11:04.230 fused_ordering(1023) 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.230 rmmod nvme_tcp 00:11:04.230 rmmod nvme_fabrics 00:11:04.230 rmmod nvme_keyring 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3657326 ']' 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3657326 
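After the run, the EXIT trap unwinds the setup: nvmfcleanup removes the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), killprocess stops the target reactor (nvmfpid=3657326 in this run, traced next), and the leftover TCP addressing is flushed. A rough manual equivalent, assuming the namespace layout from the earlier sketch (the body of _remove_spdk_ns is not shown in this trace, so the netns deletion below is an assumption):

    sudo modprobe -r nvme-tcp nvme-fabrics       # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    sudo kill "$nvmfpid"                         # nvmf_tgt pid, 3657326 here
    sudo ip netns delete cvl_0_0_ns_spdk         # assumed equivalent of _remove_spdk_ns
    sudo ip -4 addr flush cvl_0_1                # as traced once the namespace is gone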
00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3657326 ']' 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3657326 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:04.230 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3657326 00:11:04.490 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:04.490 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:04.490 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3657326' 00:11:04.490 killing process with pid 3657326 00:11:04.490 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3657326 00:11:04.490 [2024-05-15 15:49:02.833988] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:04.490 15:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3657326 00:11:04.490 15:49:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.490 15:49:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.490 15:49:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.490 15:49:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.490 15:49:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.490 15:49:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.490 15:49:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.490 15:49:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.025 15:49:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.025 00:11:07.025 real 0m14.171s 00:11:07.025 user 0m8.196s 00:11:07.025 sys 0m8.333s 00:11:07.025 15:49:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.025 15:49:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:07.025 ************************************ 00:11:07.025 END TEST nvmf_fused_ordering 00:11:07.025 ************************************ 00:11:07.025 15:49:05 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:07.025 15:49:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:07.025 15:49:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.025 15:49:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:07.025 ************************************ 00:11:07.025 START TEST nvmf_delete_subsystem 00:11:07.025 ************************************ 00:11:07.025 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
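The real/user/sys block and the START/END TEST banner rows come from run_test in common/autotest_common.sh, which times each suite, frames its output, and then dispatches the next one (here nvmf_delete_subsystem via delete_subsystem.sh --transport=tcp). The wrapper behaves roughly like the following sketch, which is only an illustration of its shape, not the actual SPDK implementation:

    run_test() {                # illustrative sketch only
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"               # produces the real/user/sys summary seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_delete_subsystem ./test/nvmf/target/delete_subsystem.sh --transport=tcp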
00:11:07.025 * Looking for test storage... 00:11:07.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.026 15:49:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.598 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:13.599 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:13.599 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:13.599 Found net devices under 0000:af:00.0: cvl_0_0 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:13.599 Found net devices under 0000:af:00.1: cvl_0_1 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:13.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:13.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms
00:11:13.599 
00:11:13.599 --- 10.0.0.2 ping statistics ---
00:11:13.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:13.599 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:13.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:13.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:11:13.599 
00:11:13.599 --- 10.0.0.1 ping statistics ---
00:11:13.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:13.599 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3661755
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3661755
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3661755 ']'
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:13.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
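The namespace plumbing traced above is worth reading as a recipe: one port of the NIC is moved into a private network namespace to act as the target, while its sibling stays in the default namespace as the initiator. A minimal sketch of the same steps, assuming the cvl_0_0/cvl_0_1 netdev names this run reports (substitute the port names of your own adapter):

  # target port moves into a private namespace; the initiator port stays put
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # 10.0.0.1 = initiator side, 10.0.0.2 = target side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and confirm reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Splitting target and initiator across kernel network stacks this way lets a single host exercise a real NVMe/TCP path over physical ports (NET_TYPE=phy); nvmf_tgt is then launched under ip netns exec, as the trace below shows, so it binds inside the target namespace.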
00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:13.599 15:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:13.599 [2024-05-15 15:49:11.787104] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:11:13.599 [2024-05-15 15:49:11.787150] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.599 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.599 [2024-05-15 15:49:11.859664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:13.599 [2024-05-15 15:49:11.934865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.599 [2024-05-15 15:49:11.934900] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.599 [2024-05-15 15:49:11.934910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.599 [2024-05-15 15:49:11.934918] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.599 [2024-05-15 15:49:11.934925] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.599 [2024-05-15 15:49:11.934968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.599 [2024-05-15 15:49:11.934971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.168 [2024-05-15 15:49:12.647310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.168 15:49:12 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.168 [2024-05-15 15:49:12.663296] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:14.168 [2024-05-15 15:49:12.663491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.168 NULL1 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.168 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.169 Delay0 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3661861 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:14.169 15:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:14.169 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.428 [2024-05-15 15:49:12.748123] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
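Stripped of the xtrace noise, the target-side configuration the test has just applied boils down to a short RPC sequence against the default /var/tmp/spdk.sock socket. A sketch using the rpc.py path from this workspace; the comments are interpretation, not part of the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # NULL1: 1000 MiB null bdev with 512-byte blocks, used as the backing device
  $rpc bdev_null_create NULL1 1000 512
  # Delay0: wraps NULL1 with all four latency knobs at 1000000 us, about a second
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The artificial latency on Delay0 is what keeps a deep backlog of commands in flight, so the nvmf_delete_subsystem call that follows has something to cut through; the spdk_nvme_perf workload above is given a 2 s head start for the same reason.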
00:11:16.344 15:49:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:16.344 15:49:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:16.344 15:49:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:16.623 Write completed with error (sct=0, sc=8)
00:11:16.623 Read completed with error (sct=0, sc=8)
00:11:16.623 starting I/O failed: -6
[... dozens of further identical Read/Write completed with error (sct=0, sc=8) and starting I/O failed: -6 entries trimmed ...]
00:11:16.624 [2024-05-15 15:49:14.918453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a9980 is same with the state(5) to be set
[... dozens of further identical Read/Write completed with error (sct=0, sc=8) and starting I/O failed: -6 entries trimmed ...]
00:11:16.624 [2024-05-15 15:49:14.919410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3a6800c600 is same with the state(5) to be set
[... dozens of further identical Read/Write completed with error (sct=0, sc=8) entries trimmed ...]
00:11:17.560 [2024-05-15 15:49:15.886907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ac420 is same with the state(5) to be set
00:11:17.560 Read completed with error (sct=0, sc=8)
00:11:17.560 Read completed with error (sct=0, sc=8)
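A note on what this wall of failed completions, above and continuing below, actually means: it is the expected outcome of the test, not a malfunction. spdk_nvme_perf still has up to 128 commands queued (-q 128) against the roughly 1 s Delay0 namespace when the subsystem is torn down, so every outstanding command completes with an error and further submissions fail with -6 (likely -ENXIO). A sketch of the same sequence in isolation, using the binaries and NQN from this run:

  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # queue 5 s of random 70/30 read/write I/O in the background
  $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  # tear the subsystem down while that I/O is still in flight
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  wait $perf_pid || echo "perf exited non-zero, as expected"

The script then polls kill -0 on the perf pid to confirm the workload really exits instead of hanging on the dead connection, which is the property nvmf_delete_subsystem is being tested for.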
00:11:17.560 Read completed with error (sct=0, sc=8)
[... further identical Read/Write completed with error (sct=0, sc=8) entries trimmed ...]
00:11:17.561 [2024-05-15 15:49:15.921202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3a6800c2f0 is same with the state(5) to be set
[... further identical Read/Write completed with error (sct=0, sc=8) entries trimmed ...]
00:11:17.561 [2024-05-15 15:49:15.922492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23abc40 is same with the state(5) to be set
[... further identical Read/Write completed with error (sct=0, sc=8) entries trimmed ...]
00:11:17.561 [2024-05-15 15:49:15.922738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a9b60 is same with the state(5) to be set
[... further identical Read/Write completed with error (sct=0, sc=8) entries trimmed ...]
00:11:17.561 [2024-05-15 15:49:15.922858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23abe20 is same with the state(5) to be set
00:11:17.561 Initializing NVMe Controllers
00:11:17.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:17.561 Controller IO queue size 128, less than required.
00:11:17.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:17.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:17.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:17.561 Initialization complete. Launching workers.
00:11:17.561 ========================================================
00:11:17.561                                                                              Latency(us)
00:11:17.561 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:11:17.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   162.89     0.08  977470.34    1117.91 1045643.74
00:11:17.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   148.49     0.07  905856.07     236.76 1012185.71
00:11:17.561 ========================================================
00:11:17.561 Total                                                                    :   311.38     0.15  943319.35     236.76 1045643.74
00:11:17.561 
00:11:17.561 [2024-05-15 15:49:15.923462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac420 (9): Bad file descriptor
00:11:17.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:17.561 15:49:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:17.561 15:49:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:17.561 15:49:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3661861
00:11:17.561 15:49:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3661861
00:11:18.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3661861) - No such process
00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3661861
00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3661861
00:11:18.131 15:49:16 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3661861 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.131 [2024-05-15 15:49:16.449450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3662599 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3662599 00:11:18.131 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:18.131 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.131 [2024-05-15 15:49:16.519196] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:11:18.700 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:18.700 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3662599 00:11:18.700 15:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:18.960 15:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:18.960 15:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3662599 00:11:18.960 15:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:19.530 15:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:19.530 15:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3662599 00:11:19.530 15:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.135 15:49:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.135 15:49:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3662599 00:11:20.135 15:49:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.708 15:49:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.708 15:49:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3662599 00:11:20.708 15:49:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.967 15:49:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.967 15:49:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3662599 00:11:20.967 15:49:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:21.226 Initializing NVMe Controllers 00:11:21.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.226 Controller IO queue size 128, less than required. 00:11:21.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:21.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:21.226 Initialization complete. Launching workers. 
00:11:21.226 ========================================================
00:11:21.226                                                                              Latency(us)
00:11:21.226 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:11:21.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06 1003650.84 1000343.63 1010524.14
00:11:21.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06 1004949.30 1000376.80 1013141.40
00:11:21.226 ========================================================
00:11:21.226 Total                                                                    :   256.00     0.12 1004300.07 1000343.63 1013141.40
00:11:21.226 
00:11:21.487 15:49:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:21.487 15:49:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3662599
00:11:21.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3662599) - No such process
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3662599
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:21.487 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:21.487 rmmod nvme_tcp
00:11:21.487 rmmod nvme_fabrics
00:11:21.487 rmmod nvme_keyring
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3661755 ']'
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3661755
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3661755 ']'
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3661755
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3661755
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3661755'
00:11:21.746 killing process with pid 3661755
00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3661755
00:11:21.746 [2024-05-15 15:49:20.123409] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:21.746 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3661755 00:11:22.005 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:22.005 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:22.005 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:22.005 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:22.005 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:22.005 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.005 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.005 15:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.911 15:49:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.911 00:11:23.911 real 0m17.206s 00:11:23.911 user 0m29.711s 00:11:23.911 sys 0m6.631s 00:11:23.911 15:49:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:23.911 15:49:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.911 ************************************ 00:11:23.911 END TEST nvmf_delete_subsystem 00:11:23.911 ************************************ 00:11:23.911 15:49:22 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:23.911 15:49:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:23.911 15:49:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:23.911 15:49:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:24.171 ************************************ 00:11:24.171 START TEST nvmf_ns_masking 00:11:24.171 ************************************ 00:11:24.171 15:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:24.171 * Looking for test storage... 
00:11:24.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.171 15:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.171 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:24.171 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=ed1e1b9d-16ae-4c89-b246-a3cbbd2b9318 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:24.172 15:49:22 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:24.172 15:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.744 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:30.744 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:30.745 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:30.745 Found net devices under 0000:af:00.0: cvl_0_0 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
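For reference, the device-discovery loop traced above keys entirely off PCI vendor:device IDs and sysfs: 0x8086:0x159b is matched against the e810 list, the bound driver (ice here) is checked, and each port's kernel netdev is read from /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of the same idea, assuming lspci(8) is available — the cvl_* names above come from the ice driver, not from anything in this sketch:

# enumerate Intel E810 functions (0x8086:0x159b, the ID matched in the log)
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    # resolve the bound kernel driver from the sysfs "driver" symlink
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")
    # list the net device(s) the kernel exposes for this PCI function
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net device under $pci ($driver): ${dev##*/}"
    done
done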
00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:30.745 Found net devices under 0000:af:00.1: cvl_0_1 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:30.745 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:11:31.005 00:11:31.005 --- 10.0.0.2 ping statistics --- 00:11:31.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.005 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:11:31.005 00:11:31.005 --- 10.0.0.1 ping statistics --- 00:11:31.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.005 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3666892 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3666892 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3666892 ']' 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:31.005 15:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:31.005 [2024-05-15 15:49:29.496968] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
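Taken together, the interface plumbing traced above reduces to a short self-contained sequence: the target port is moved into its own network namespace, each side gets an address on 10.0.0.0/24, the NVMe/TCP port is opened, and reachability is verified in both directions. The arguments below are copied from the log; treat this as a distilled sketch rather than SPDK's canonical nvmf_tcp_init helper:

ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

With the namespace in place, nvmf_tgt is then launched under "ip netns exec cvl_0_0_ns_spdk", as the log shows next, so all target traffic is forced across the physical link between the two E810 ports.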
00:11:31.005 [2024-05-15 15:49:29.497015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.005 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.005 [2024-05-15 15:49:29.563841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.265 [2024-05-15 15:49:29.634671] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.265 [2024-05-15 15:49:29.634711] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.265 [2024-05-15 15:49:29.634720] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.265 [2024-05-15 15:49:29.634728] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.265 [2024-05-15 15:49:29.634751] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.265 [2024-05-15 15:49:29.634799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.265 [2024-05-15 15:49:29.634892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.265 [2024-05-15 15:49:29.634979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.265 [2024-05-15 15:49:29.634981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.834 15:49:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:31.834 15:49:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:11:31.834 15:49:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.834 15:49:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:31.834 15:49:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:31.834 15:49:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.834 15:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:32.093 [2024-05-15 15:49:30.518556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.093 15:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:32.093 15:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:32.093 15:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:32.353 Malloc1 00:11:32.353 15:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:32.353 Malloc2 00:11:32.612 15:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:32.612 15:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:32.871 15:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.871 [2024-05-15 15:49:31.425472] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:32.871 [2024-05-15 15:49:31.425757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.130 15:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:33.130 15:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ed1e1b9d-16ae-4c89-b246-a3cbbd2b9318 -a 10.0.0.2 -s 4420 -i 4 00:11:33.130 15:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.130 15:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:33.130 15:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.130 15:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:33.130 15:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:35.040 15:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:35.040 15:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:35.040 15:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:35.303 [ 0]:0x1 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4303b23e35b498ba6cee1f22255176c 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4303b23e35b498ba6cee1f22255176c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.303 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:35.562 15:49:33 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:35.562 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:35.562 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:35.562 [ 0]:0x1 00:11:35.562 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:35.562 15:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4303b23e35b498ba6cee1f22255176c 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4303b23e35b498ba6cee1f22255176c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:35.562 [ 1]:0x2 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=96d8214710544b038ba0feda6eb6c475 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 96d8214710544b038ba0feda6eb6c475 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:35.562 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.821 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.080 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:36.339 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:36.339 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ed1e1b9d-16ae-4c89-b246-a3cbbd2b9318 -a 10.0.0.2 -s 4420 -i 4 00:11:36.598 15:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:36.598 15:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:36.598 15:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.598 15:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:11:36.598 15:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:11:36.598 15:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.530 15:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:38.530 [ 0]:0x2 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:38.530 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=96d8214710544b038ba0feda6eb6c475 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 96d8214710544b038ba0feda6eb6c475 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:38.790 [ 0]:0x1 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4303b23e35b498ba6cee1f22255176c 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4303b23e35b498ba6cee1f22255176c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:38.790 [ 1]:0x2 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.790 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=96d8214710544b038ba0feda6eb6c475 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 96d8214710544b038ba0feda6eb6c475 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:39.049 15:49:37 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:39.049 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:39.050 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:39.050 15:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:39.050 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:39.309 [ 0]:0x2 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=96d8214710544b038ba0feda6eb6c475 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 96d8214710544b038ba0feda6eb6c475 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.309 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:39.569 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:39.569 15:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ed1e1b9d-16ae-4c89-b246-a3cbbd2b9318 -a 10.0.0.2 -s 4420 -i 4 00:11:39.569 15:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:39.569 15:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:39.569 15:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.569 15:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:39.569 15:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:39.569 15:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:41.475 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:41.475 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:41.475 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.475 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:41.475 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.475 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:41.734 [ 0]:0x1 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4303b23e35b498ba6cee1f22255176c 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4303b23e35b498ba6cee1f22255176c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:41.734 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:41.993 [ 1]:0x2 00:11:41.993 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.993 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:41.993 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=96d8214710544b038ba0feda6eb6c475 00:11:41.993 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 96d8214710544b038ba0feda6eb6c475 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.994 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:42.254 [ 0]:0x2 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=96d8214710544b038ba0feda6eb6c475 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 96d8214710544b038ba0feda6eb6c475 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:42.254 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.514 [2024-05-15 15:49:40.840172] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:42.514 
request: 00:11:42.514 { 00:11:42.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.514 "nsid": 2, 00:11:42.514 "host": "nqn.2016-06.io.spdk:host1", 00:11:42.514 "method": "nvmf_ns_remove_host", 00:11:42.514 "req_id": 1 00:11:42.514 } 00:11:42.514 Got JSON-RPC error response 00:11:42.514 response: 00:11:42.514 { 00:11:42.514 "code": -32602, 00:11:42.514 "message": "Invalid parameters" 00:11:42.514 } 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:42.514 15:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:42.514 [ 0]:0x2 00:11:42.514 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.514 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:42.774 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=96d8214710544b038ba0feda6eb6c475 00:11:42.774 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 96d8214710544b038ba0feda6eb6c475 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.774 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:42.774 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.774 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:43.033 rmmod nvme_tcp 00:11:43.033 rmmod nvme_fabrics 00:11:43.033 rmmod nvme_keyring 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3666892 ']' 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3666892 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3666892 ']' 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3666892 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3666892 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3666892' 00:11:43.033 killing process with pid 3666892 00:11:43.033 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3666892 00:11:43.033 [2024-05-15 15:49:41.543340] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:43.034 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3666892 00:11:43.296 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.296 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.296 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.297 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:11:43.297 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.297 15:49:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.297 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.297 15:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.836 15:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:45.836 00:11:45.836 real 0m21.380s 00:11:45.836 user 0m52.140s 00:11:45.836 sys 0m7.743s 00:11:45.837 15:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:45.837 15:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:45.837 ************************************ 00:11:45.837 END TEST nvmf_ns_masking 00:11:45.837 ************************************ 00:11:45.837 15:49:43 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:45.837 15:49:43 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:45.837 15:49:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:45.837 15:49:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:45.837 15:49:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:45.837 ************************************ 00:11:45.837 START TEST nvmf_nvme_cli 00:11:45.837 ************************************ 00:11:45.837 15:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:45.837 * Looking for test storage... 
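The ns_masking run that just finished exercised namespace masking end to end. Condensed to the bare command sequence it traced — ordering simplified, rpc.py abbreviating the full scripts/rpc.py path shown above, all arguments taken from the log:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I ed1e1b9d-16ae-4c89-b246-a3cbbd2b9318 -a 10.0.0.2 -s 4420 -i 4
nvme list-ns /dev/nvme0     # ns 1 hidden: added with --no-auto-visible
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme list-ns /dev/nvme0     # ns 1 now visible to host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme list-ns /dev/nvme0     # ns 1 masked again
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The negative test in the log (nvmf_ns_remove_host against nsid 2, which was added auto-visible) is the counterpart: add/remove host only applies to namespaces created with --no-auto-visible, hence the -32602 "Invalid parameters" JSON-RPC error above.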
00:11:45.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.837 15:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:52.411 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:52.411 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:52.411 Found net devices under 0000:af:00.0: cvl_0_0 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:52.411 Found net devices under 0000:af:00.1: cvl_0_1 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.411 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:11:52.412 00:11:52.412 --- 10.0.0.2 ping statistics --- 00:11:52.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.412 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:52.412 00:11:52.412 --- 10.0.0.1 ping statistics --- 00:11:52.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.412 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3672868 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3672868 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3672868 ']' 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:52.412 15:49:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.412 [2024-05-15 15:49:50.843473] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:11:52.412 [2024-05-15 15:49:50.843519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.412 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.412 [2024-05-15 15:49:50.919362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.672 [2024-05-15 15:49:50.995578] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.672 [2024-05-15 15:49:50.995616] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
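Condensed from the nvmf_tcp_init trace above: the test builds its test topology out of the two physical e810 ports by moving one of them into a private network namespace, so the target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1, in the default namespace) exchange real NVMe/TCP traffic on a single host. A minimal sketch of that bring-up, reusing the interface and namespace names this run reports:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                           # namespace that hosts the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                     # sanity-check both directions, as above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1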
00:11:52.672 [2024-05-15 15:49:50.995625] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.672 [2024-05-15 15:49:50.995634] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.672 [2024-05-15 15:49:50.995641] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.672 [2024-05-15 15:49:50.995686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.672 [2024-05-15 15:49:50.995781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.672 [2024-05-15 15:49:50.995867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.672 [2024-05-15 15:49:50.995869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.240 [2024-05-15 15:49:51.701968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.240 Malloc0 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.240 Malloc1 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.240 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.241 15:49:51 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.241 [2024-05-15 15:49:51.786227] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:53.241 [2024-05-15 15:49:51.786502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.241 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:11:53.500 00:11:53.500 Discovery Log Number of Records 2, Generation counter 2 00:11:53.500 =====Discovery Log Entry 0====== 00:11:53.500 trtype: tcp 00:11:53.500 adrfam: ipv4 00:11:53.500 subtype: current discovery subsystem 00:11:53.500 treq: not required 00:11:53.500 portid: 0 00:11:53.500 trsvcid: 4420 00:11:53.500 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:53.500 traddr: 10.0.0.2 00:11:53.500 eflags: explicit discovery connections, duplicate discovery information 00:11:53.500 sectype: none 00:11:53.500 =====Discovery Log Entry 1====== 00:11:53.500 trtype: tcp 00:11:53.500 adrfam: ipv4 00:11:53.500 subtype: nvme subsystem 00:11:53.500 treq: not required 00:11:53.500 portid: 0 00:11:53.500 trsvcid: 4420 00:11:53.500 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:53.500 traddr: 10.0.0.2 00:11:53.500 eflags: none 00:11:53.500 sectype: none 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
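The rpc_cmd calls traced above boil down to a short provisioning sequence, and the nvme discover output that follows is how the initiator verifies it. A sketch of the same steps issued directly with scripts/rpc.py (rpc.py is shorthand for the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path; NQNs, serial, and address are the ones this run uses):

  rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as traced above
  rpc.py bdev_malloc_create 64 512 -b Malloc0             # two 64 MiB RAM-backed bdevs, 512 B blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  nvme discover -t tcp -a 10.0.0.2 -s 4420                # expect two records: discovery + cnode1

After the nvme connect below, the two namespaces surface as /dev/nvme0n1 and /dev/nvme0n2, which is what the get_nvme_devs loop is scanning nvme list for.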
00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:53.500 15:49:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.880 15:49:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:54.880 15:49:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:11:54.880 15:49:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.880 15:49:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:54.880 15:49:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:54.880 15:49:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:57.416 /dev/nvme0n1 ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:57.416 15:49:55 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:57.416 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.676 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:57.676 15:49:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:57.676 rmmod nvme_tcp 00:11:57.676 rmmod nvme_fabrics 00:11:57.676 rmmod nvme_keyring 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3672868 ']' 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3672868 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3672868 ']' 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3672868 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3672868 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3672868' 00:11:57.676 killing process with pid 3672868 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3672868 00:11:57.676 [2024-05-15 15:49:56.136069] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:57.676 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3672868 00:11:57.936 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:57.936 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:57.936 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:57.936 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:57.936 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:57.936 15:49:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.936 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.936 15:49:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.475 15:49:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:00.475 00:12:00.475 real 0m14.504s 00:12:00.475 user 0m23.072s 00:12:00.475 sys 0m5.955s 00:12:00.475 15:49:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:00.475 15:49:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:00.475 ************************************ 00:12:00.475 END TEST nvmf_nvme_cli 00:12:00.475 ************************************ 00:12:00.475 15:49:58 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:00.475 15:49:58 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:00.475 15:49:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:00.475 15:49:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:00.475 15:49:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.475 ************************************ 00:12:00.475 
START TEST nvmf_vfio_user 00:12:00.475 ************************************ 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:00.475 * Looking for test storage... 00:12:00.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:00.475 15:49:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
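One side effect visible in the PATH values echoed just above: every test that sources /etc/opt/spdk-pkgdep/paths/export.sh prepends the same go/golangci/protoc directories again, so PATH accumulates duplicates over the run. Harmless, but an idempotent prepend would keep it flat; a purely illustrative guard (not what export.sh currently does):

  # prepend a directory to PATH only if it is not already present
  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;            # already there, nothing to do
          *) PATH="$1:$PATH" ;;
      esac
  }
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/golangci/1.54.2/bin
  export PATH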
00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3674333 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3674333' 00:12:00.476 Process pid: 3674333 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3674333 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3674333 ']' 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:00.476 15:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:00.476 [2024-05-15 15:49:58.756224] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:12:00.476 [2024-05-15 15:49:58.756280] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.476 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.476 [2024-05-15 15:49:58.826400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.476 [2024-05-15 15:49:58.899876] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.476 [2024-05-15 15:49:58.899916] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.476 [2024-05-15 15:49:58.899924] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.476 [2024-05-15 15:49:58.899932] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.476 [2024-05-15 15:49:58.899940] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
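The nvmfpid/waitforlisten pair traced here is the standard app-launch idiom in these tests: start nvmf_tgt in the background, remember its pid, and block until the JSON-RPC socket answers. waitforlisten itself lives in autotest_common.sh; the loop below is only a hypothetical stand-in for what it waits on:

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &    # four reactors, as in this run
  nvmfpid=$!
  # poll until the target answers on its default RPC socket
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
      sleep 0.5
  done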
00:12:00.476 [2024-05-15 15:49:58.900000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.476 [2024-05-15 15:49:58.900096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.476 [2024-05-15 15:49:58.900179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.476 [2024-05-15 15:49:58.900181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.077 15:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:01.077 15:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:12:01.077 15:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:02.023 15:50:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:02.282 15:50:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:02.282 15:50:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:02.282 15:50:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:02.282 15:50:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:02.282 15:50:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:02.541 Malloc1 00:12:02.541 15:50:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:02.800 15:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:02.800 15:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:03.059 [2024-05-15 15:50:01.493690] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:03.059 15:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:03.059 15:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:03.059 15:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:03.318 Malloc2 00:12:03.318 15:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:03.577 15:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:03.577 15:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
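For the VFIOUSER transport the listener address is a directory, not an IP: each controller gets its own /var/run/vfio-user/domain/vfio-userN/N, and the target creates the cntrl socket there that spdk_nvme_identify opens below. The setup_nvmf_vfio_user steps traced above condense to the following (paths and NQNs as in this run; the service id is passed as 0 throughout the trace and looks like a placeholder for this transport):

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # ...then the same again with vfio-user2/2, Malloc2, cnode2, and serial SPDK2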
00:12:03.838 15:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:03.838 15:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:03.838 15:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:03.838 15:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:03.838 15:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:03.838 15:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:03.838 [2024-05-15 15:50:02.291739] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:12:03.838 [2024-05-15 15:50:02.291787] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674899 ] 00:12:03.838 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.838 [2024-05-15 15:50:02.323543] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:03.838 [2024-05-15 15:50:02.333588] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:03.838 [2024-05-15 15:50:02.333609] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2e3dc68000 00:12:03.838 [2024-05-15 15:50:02.334584] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:03.838 [2024-05-15 15:50:02.335587] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:03.838 [2024-05-15 15:50:02.336592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:03.838 [2024-05-15 15:50:02.337597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:03.838 [2024-05-15 15:50:02.338603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:03.838 [2024-05-15 15:50:02.339606] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:03.838 [2024-05-15 15:50:02.340612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:03.838 [2024-05-15 15:50:02.341612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:03.839 [2024-05-15 15:50:02.342627] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:03.839 [2024-05-15 15:50:02.342640] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2e3dc5d000 00:12:03.839 [2024-05-15 15:50:02.343535] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:03.839 [2024-05-15 15:50:02.355829] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:03.839 [2024-05-15 15:50:02.355860] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:03.839 [2024-05-15 15:50:02.358732] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:03.839 [2024-05-15 15:50:02.358773] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:03.839 [2024-05-15 15:50:02.358912] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:03.839 [2024-05-15 15:50:02.358930] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:03.839 [2024-05-15 15:50:02.358938] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:03.839 [2024-05-15 15:50:02.359729] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:03.839 [2024-05-15 15:50:02.359740] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:03.839 [2024-05-15 15:50:02.359749] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:03.839 [2024-05-15 15:50:02.360733] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:03.839 [2024-05-15 15:50:02.360743] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:03.839 [2024-05-15 15:50:02.360752] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:03.839 [2024-05-15 15:50:02.361739] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:03.839 [2024-05-15 15:50:02.361749] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:03.839 [2024-05-15 15:50:02.362744] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:03.839 [2024-05-15 15:50:02.362753] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:03.839 [2024-05-15 15:50:02.362759] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:03.839 [2024-05-15 15:50:02.362768] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:03.839 
[2024-05-15 15:50:02.362874] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:03.839 [2024-05-15 15:50:02.362881] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:03.839 [2024-05-15 15:50:02.362887] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:03.839 [2024-05-15 15:50:02.363753] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:03.839 [2024-05-15 15:50:02.364761] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:03.839 [2024-05-15 15:50:02.365764] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:03.839 [2024-05-15 15:50:02.366763] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:03.839 [2024-05-15 15:50:02.366831] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:03.839 [2024-05-15 15:50:02.367776] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:03.839 [2024-05-15 15:50:02.367786] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:03.839 [2024-05-15 15:50:02.367792] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:03.839 [2024-05-15 15:50:02.367811] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:03.839 [2024-05-15 15:50:02.367827] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:03.839 [2024-05-15 15:50:02.367845] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:03.839 [2024-05-15 15:50:02.367851] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:03.839 [2024-05-15 15:50:02.367866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:03.839 [2024-05-15 15:50:02.367910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:03.839 [2024-05-15 15:50:02.367921] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:03.839 [2024-05-15 15:50:02.367927] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:03.839 [2024-05-15 15:50:02.367933] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:03.839 [2024-05-15 15:50:02.367939] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:03.839 [2024-05-15 15:50:02.367945] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:03.839 [2024-05-15 15:50:02.367951] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:03.839 [2024-05-15 15:50:02.367957] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:03.839 [2024-05-15 15:50:02.367970] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:03.839 [2024-05-15 15:50:02.367983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:03.839 [2024-05-15 15:50:02.367998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:03.839 [2024-05-15 15:50:02.368011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.839 [2024-05-15 15:50:02.368021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.839 [2024-05-15 15:50:02.368029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.839 [2024-05-15 15:50:02.368038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.839 [2024-05-15 15:50:02.368044] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:03.839 [2024-05-15 15:50:02.368053] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368080] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:03.840 [2024-05-15 15:50:02.368088] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368096] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:03.840 [2024-05-15 
15:50:02.368131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368172] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368195] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:03.840 [2024-05-15 15:50:02.368201] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:03.840 [2024-05-15 15:50:02.368208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368235] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:03.840 [2024-05-15 15:50:02.368245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368254] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368262] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:03.840 [2024-05-15 15:50:02.368268] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:03.840 [2024-05-15 15:50:02.368274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368300] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368309] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368317] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:03.840 [2024-05-15 15:50:02.368323] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:03.840 [2024-05-15 15:50:02.368330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368353] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:03.840 
[2024-05-15 15:50:02.368362] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368370] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368379] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368386] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368392] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:03.840 [2024-05-15 15:50:02.368398] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:03.840 [2024-05-15 15:50:02.368405] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:03.840 [2024-05-15 15:50:02.368425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368512] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:03.840 [2024-05-15 15:50:02.368518] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:03.840 [2024-05-15 15:50:02.368522] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:03.840 [2024-05-15 15:50:02.368527] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:03.840 [2024-05-15 15:50:02.368534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:03.840 [2024-05-15 15:50:02.368542] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:03.840 [2024-05-15 15:50:02.368547] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:03.840 [2024-05-15 15:50:02.368554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368562] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:03.840 [2024-05-15 15:50:02.368568] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:03.840 [2024-05-15 15:50:02.368574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368584] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:03.840 [2024-05-15 15:50:02.368590] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:03.840 [2024-05-15 15:50:02.368596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:03.840 [2024-05-15 15:50:02.368606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:03.840 [2024-05-15 15:50:02.368642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:03.840 ===================================================== 00:12:03.840 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:03.840 ===================================================== 00:12:03.840 Controller Capabilities/Features 00:12:03.840 ================================ 00:12:03.840 Vendor ID: 4e58 00:12:03.840 Subsystem Vendor ID: 4e58 00:12:03.840 Serial Number: SPDK1 00:12:03.840 Model Number: SPDK bdev Controller 00:12:03.840 Firmware Version: 24.05 00:12:03.840 Recommended Arb Burst: 6 00:12:03.840 IEEE OUI Identifier: 8d 6b 50 00:12:03.840 Multi-path I/O 00:12:03.840 May have multiple subsystem ports: Yes 00:12:03.840 May have multiple controllers: Yes 00:12:03.840 Associated with SR-IOV VF: No 00:12:03.840 Max Data Transfer Size: 131072 00:12:03.840 Max Number of Namespaces: 32 00:12:03.840 Max Number of I/O Queues: 127 00:12:03.840 NVMe Specification Version (VS): 1.3 00:12:03.841 NVMe Specification Version (Identify): 1.3 00:12:03.841 Maximum Queue Entries: 256 00:12:03.841 Contiguous Queues Required: Yes 00:12:03.841 Arbitration Mechanisms Supported 00:12:03.841 Weighted Round Robin: Not Supported 00:12:03.841 Vendor Specific: Not Supported 00:12:03.841 Reset Timeout: 15000 ms 00:12:03.841 Doorbell Stride: 4 bytes 00:12:03.841 NVM Subsystem Reset: Not Supported 00:12:03.841 Command Sets Supported 00:12:03.841 NVM Command Set: Supported 00:12:03.841 Boot Partition: Not Supported 00:12:03.841 Memory Page Size Minimum: 4096 bytes 00:12:03.841 Memory Page Size Maximum: 4096 bytes 00:12:03.841 Persistent Memory Region: Not Supported 00:12:03.841 Optional Asynchronous 
Events Supported 00:12:03.841 Namespace Attribute Notices: Supported 00:12:03.841 Firmware Activation Notices: Not Supported 00:12:03.841 ANA Change Notices: Not Supported 00:12:03.841 PLE Aggregate Log Change Notices: Not Supported 00:12:03.841 LBA Status Info Alert Notices: Not Supported 00:12:03.841 EGE Aggregate Log Change Notices: Not Supported 00:12:03.841 Normal NVM Subsystem Shutdown event: Not Supported 00:12:03.841 Zone Descriptor Change Notices: Not Supported 00:12:03.841 Discovery Log Change Notices: Not Supported 00:12:03.841 Controller Attributes 00:12:03.841 128-bit Host Identifier: Supported 00:12:03.841 Non-Operational Permissive Mode: Not Supported 00:12:03.841 NVM Sets: Not Supported 00:12:03.841 Read Recovery Levels: Not Supported 00:12:03.841 Endurance Groups: Not Supported 00:12:03.841 Predictable Latency Mode: Not Supported 00:12:03.841 Traffic Based Keep ALive: Not Supported 00:12:03.841 Namespace Granularity: Not Supported 00:12:03.841 SQ Associations: Not Supported 00:12:03.841 UUID List: Not Supported 00:12:03.841 Multi-Domain Subsystem: Not Supported 00:12:03.841 Fixed Capacity Management: Not Supported 00:12:03.841 Variable Capacity Management: Not Supported 00:12:03.841 Delete Endurance Group: Not Supported 00:12:03.841 Delete NVM Set: Not Supported 00:12:03.841 Extended LBA Formats Supported: Not Supported 00:12:03.841 Flexible Data Placement Supported: Not Supported 00:12:03.841 00:12:03.841 Controller Memory Buffer Support 00:12:03.841 ================================ 00:12:03.841 Supported: No 00:12:03.841 00:12:03.841 Persistent Memory Region Support 00:12:03.841 ================================ 00:12:03.841 Supported: No 00:12:03.841 00:12:03.841 Admin Command Set Attributes 00:12:03.841 ============================ 00:12:03.841 Security Send/Receive: Not Supported 00:12:03.841 Format NVM: Not Supported 00:12:03.841 Firmware Activate/Download: Not Supported 00:12:03.841 Namespace Management: Not Supported 00:12:03.841 Device Self-Test: Not Supported 00:12:03.841 Directives: Not Supported 00:12:03.841 NVMe-MI: Not Supported 00:12:03.841 Virtualization Management: Not Supported 00:12:03.841 Doorbell Buffer Config: Not Supported 00:12:03.841 Get LBA Status Capability: Not Supported 00:12:03.841 Command & Feature Lockdown Capability: Not Supported 00:12:03.841 Abort Command Limit: 4 00:12:03.841 Async Event Request Limit: 4 00:12:03.841 Number of Firmware Slots: N/A 00:12:03.841 Firmware Slot 1 Read-Only: N/A 00:12:03.841 Firmware Activation Without Reset: N/A 00:12:03.841 Multiple Update Detection Support: N/A 00:12:03.841 Firmware Update Granularity: No Information Provided 00:12:03.841 Per-Namespace SMART Log: No 00:12:03.841 Asymmetric Namespace Access Log Page: Not Supported 00:12:03.841 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:03.841 Command Effects Log Page: Supported 00:12:03.841 Get Log Page Extended Data: Supported 00:12:03.841 Telemetry Log Pages: Not Supported 00:12:03.841 Persistent Event Log Pages: Not Supported 00:12:03.841 Supported Log Pages Log Page: May Support 00:12:03.841 Commands Supported & Effects Log Page: Not Supported 00:12:03.841 Feature Identifiers & Effects Log Page:May Support 00:12:03.841 NVMe-MI Commands & Effects Log Page: May Support 00:12:03.841 Data Area 4 for Telemetry Log: Not Supported 00:12:03.841 Error Log Page Entries Supported: 128 00:12:03.841 Keep Alive: Supported 00:12:03.841 Keep Alive Granularity: 10000 ms 00:12:03.841 00:12:03.841 NVM Command Set Attributes 00:12:03.841 ========================== 
00:12:03.841 Submission Queue Entry Size 00:12:03.841 Max: 64 00:12:03.841 Min: 64 00:12:03.841 Completion Queue Entry Size 00:12:03.841 Max: 16 00:12:03.841 Min: 16 00:12:03.841 Number of Namespaces: 32 00:12:03.841 Compare Command: Supported 00:12:03.841 Write Uncorrectable Command: Not Supported 00:12:03.841 Dataset Management Command: Supported 00:12:03.841 Write Zeroes Command: Supported 00:12:03.841 Set Features Save Field: Not Supported 00:12:03.841 Reservations: Not Supported 00:12:03.841 Timestamp: Not Supported 00:12:03.841 Copy: Supported 00:12:03.841 Volatile Write Cache: Present 00:12:03.841 Atomic Write Unit (Normal): 1 00:12:03.841 Atomic Write Unit (PFail): 1 00:12:03.841 Atomic Compare & Write Unit: 1 00:12:03.841 Fused Compare & Write: Supported 00:12:03.841 Scatter-Gather List 00:12:03.841 SGL Command Set: Supported (Dword aligned) 00:12:03.841 SGL Keyed: Not Supported 00:12:03.841 SGL Bit Bucket Descriptor: Not Supported 00:12:03.841 SGL Metadata Pointer: Not Supported 00:12:03.841 Oversized SGL: Not Supported 00:12:03.841 SGL Metadata Address: Not Supported 00:12:03.841 SGL Offset: Not Supported 00:12:03.841 Transport SGL Data Block: Not Supported 00:12:03.841 Replay Protected Memory Block: Not Supported 00:12:03.841 00:12:03.841 Firmware Slot Information 00:12:03.841 ========================= 00:12:03.841 Active slot: 1 00:12:03.841 Slot 1 Firmware Revision: 24.05 00:12:03.841 00:12:03.841 00:12:03.841 Commands Supported and Effects 00:12:03.841 ============================== 00:12:03.841 Admin Commands 00:12:03.841 -------------- 00:12:03.841 Get Log Page (02h): Supported 00:12:03.841 Identify (06h): Supported 00:12:03.841 Abort (08h): Supported 00:12:03.841 Set Features (09h): Supported 00:12:03.841 Get Features (0Ah): Supported 00:12:03.841 Asynchronous Event Request (0Ch): Supported 00:12:03.841 Keep Alive (18h): Supported 00:12:03.841 I/O Commands 00:12:03.841 ------------ 00:12:03.841 Flush (00h): Supported LBA-Change 00:12:03.841 Write (01h): Supported LBA-Change 00:12:03.841 Read (02h): Supported 00:12:03.841 Compare (05h): Supported 00:12:03.841 Write Zeroes (08h): Supported LBA-Change 00:12:03.841 Dataset Management (09h): Supported LBA-Change 00:12:03.841 Copy (19h): Supported LBA-Change 00:12:03.841 Unknown (79h): Supported LBA-Change 00:12:03.841 Unknown (7Ah): Supported 00:12:03.841 00:12:03.841 Error Log 00:12:03.841 ========= 00:12:03.841 00:12:03.841 Arbitration 00:12:03.841 =========== 00:12:03.841 Arbitration Burst: 1 00:12:03.841 00:12:03.841 Power Management 00:12:03.841 ================ 00:12:03.841 Number of Power States: 1 00:12:03.841 Current Power State: Power State #0 00:12:03.841 Power State #0: 00:12:03.841 Max Power: 0.00 W 00:12:03.841 Non-Operational State: Operational 00:12:03.841 Entry Latency: Not Reported 00:12:03.841 Exit Latency: Not Reported 00:12:03.841 Relative Read Throughput: 0 00:12:03.841 Relative Read Latency: 0 00:12:03.841 Relative Write Throughput: 0 00:12:03.841 Relative Write Latency: 0 00:12:03.841 Idle Power: Not Reported 00:12:03.841 Active Power: Not Reported 00:12:03.841 Non-Operational Permissive Mode: Not Supported 00:12:03.841 00:12:03.841 Health Information 00:12:03.841 ================== 00:12:03.841 Critical Warnings: 00:12:03.841 Available Spare Space: OK 00:12:03.841 Temperature: OK 00:12:03.841 Device Reliability: OK 00:12:03.841 Read Only: No 00:12:03.841 Volatile Memory Backup: OK 00:12:03.841 [2024-05-15 15:50:02.368727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:03.842 [2024-05-15 15:50:02.368736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:03.842 [2024-05-15 15:50:02.368763] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:03.842 [2024-05-15 15:50:02.368773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.842 [2024-05-15 15:50:02.368781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.842 [2024-05-15 15:50:02.368789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.842 [2024-05-15 15:50:02.368796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.842 [2024-05-15 15:50:02.371199] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:03.842 [2024-05-15 15:50:02.371211] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:03.842 [2024-05-15 15:50:02.371794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:03.842 [2024-05-15 15:50:02.371843] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:03.842 [2024-05-15 15:50:02.371851] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:03.842 [2024-05-15 15:50:02.372804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:03.842 [2024-05-15 15:50:02.372816] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:03.842 [2024-05-15 15:50:02.372864] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:03.842 [2024-05-15 15:50:02.373837] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:12:04.101 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:04.101 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:04.101 Available Spare: 0% 00:12:04.101 Available Spare Threshold: 0% 00:12:04.101 Life Percentage Used: 0% 00:12:04.101 Data Units Read: 0 00:12:04.101 Data Units Written: 0 00:12:04.102 Host Read Commands: 0 00:12:04.102 Host Write Commands: 0 00:12:04.102 Controller Busy Time: 0 minutes 00:12:04.102 Power Cycles: 0 00:12:04.102 Power On Hours: 0 hours 00:12:04.102 Unsafe Shutdowns: 0 00:12:04.102 Unrecoverable Media Errors: 0 00:12:04.102 Lifetime Error Log Entries: 0 00:12:04.102 Warning Temperature Time: 0 minutes 00:12:04.102 Critical Temperature Time: 0 minutes 00:12:04.102 00:12:04.102 Number of Queues 00:12:04.102 ================ 00:12:04.102 Number of I/O Submission Queues: 127 00:12:04.102 Number of I/O Completion Queues: 127 00:12:04.102 00:12:04.102 Active Namespaces 00:12:04.102 ================= 00:12:04.102 Namespace 
ID:1 00:12:04.102 Error Recovery Timeout: Unlimited 00:12:04.102 Command Set Identifier: NVM (00h) 00:12:04.102 Deallocate: Supported 00:12:04.102 Deallocated/Unwritten Error: Not Supported 00:12:04.102 Deallocated Read Value: Unknown 00:12:04.102 Deallocate in Write Zeroes: Not Supported 00:12:04.102 Deallocated Guard Field: 0xFFFF 00:12:04.102 Flush: Supported 00:12:04.102 Reservation: Supported 00:12:04.102 Namespace Sharing Capabilities: Multiple Controllers 00:12:04.102 Size (in LBAs): 131072 (0GiB) 00:12:04.102 Capacity (in LBAs): 131072 (0GiB) 00:12:04.102 Utilization (in LBAs): 131072 (0GiB) 00:12:04.102 NGUID: 8B60481DBE794917AA0628DC03A7DB61 00:12:04.102 UUID: 8b60481d-be79-4917-aa06-28dc03a7db61 00:12:04.102 Thin Provisioning: Not Supported 00:12:04.102 Per-NS Atomic Units: Yes 00:12:04.102 Atomic Boundary Size (Normal): 0 00:12:04.102 Atomic Boundary Size (PFail): 0 00:12:04.102 Atomic Boundary Offset: 0 00:12:04.102 Maximum Single Source Range Length: 65535 00:12:04.102 Maximum Copy Length: 65535 00:12:04.102 Maximum Source Range Count: 1 00:12:04.102 NGUID/EUI64 Never Reused: No 00:12:04.102 Namespace Write Protected: No 00:12:04.102 Number of LBA Formats: 1 00:12:04.102 Current LBA Format: LBA Format #00 00:12:04.102 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:04.102 00:12:04.102 15:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:04.102 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.102 [2024-05-15 15:50:02.592979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:09.387 Initializing NVMe Controllers 00:12:09.387 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:09.387 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:09.387 Initialization complete. Launching workers. 00:12:09.387 ======================================================== 00:12:09.387 Latency(us) 00:12:09.387 Device Information : IOPS MiB/s Average min max 00:12:09.387 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39966.39 156.12 3202.89 905.97 7657.26 00:12:09.387 ======================================================== 00:12:09.387 Total : 39966.39 156.12 3202.89 905.97 7657.26 00:12:09.387 00:12:09.387 [2024-05-15 15:50:07.614345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:09.387 15:50:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:09.387 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.387 [2024-05-15 15:50:07.825325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:14.660 Initializing NVMe Controllers 00:12:14.660 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:14.660 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:14.660 Initialization complete. Launching workers. 
00:12:14.660 ======================================================== 00:12:14.660 Latency(us) 00:12:14.660 Device Information : IOPS MiB/s Average min max 00:12:14.660 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.03 62.71 7978.45 6981.71 8980.02 00:12:14.660 ======================================================== 00:12:14.660 Total : 16054.03 62.71 7978.45 6981.71 8980.02 00:12:14.660 00:12:14.660 [2024-05-15 15:50:12.869527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:14.660 15:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:14.660 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.660 [2024-05-15 15:50:13.081560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.937 [2024-05-15 15:50:18.165554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.937 Initializing NVMe Controllers 00:12:19.937 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.937 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.937 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:19.937 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:19.937 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:19.937 Initialization complete. Launching workers. 00:12:19.937 Starting thread on core 2 00:12:19.937 Starting thread on core 3 00:12:19.937 Starting thread on core 1 00:12:19.937 15:50:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:19.937 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.937 [2024-05-15 15:50:18.466561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.134 [2024-05-15 15:50:22.100565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.134 Initializing NVMe Controllers 00:12:24.134 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:24.134 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:24.134 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:24.134 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:24.134 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:24.134 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:24.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:24.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:24.134 Initialization complete. Launching workers. 
00:12:24.134 Starting thread on core 1 with urgent priority queue 00:12:24.134 Starting thread on core 2 with urgent priority queue 00:12:24.134 Starting thread on core 3 with urgent priority queue 00:12:24.134 Starting thread on core 0 with urgent priority queue 00:12:24.134 SPDK bdev Controller (SPDK1 ) core 0: 3239.67 IO/s 30.87 secs/100000 ios 00:12:24.134 SPDK bdev Controller (SPDK1 ) core 1: 3795.67 IO/s 26.35 secs/100000 ios 00:12:24.134 SPDK bdev Controller (SPDK1 ) core 2: 3775.33 IO/s 26.49 secs/100000 ios 00:12:24.134 SPDK bdev Controller (SPDK1 ) core 3: 3990.00 IO/s 25.06 secs/100000 ios 00:12:24.134 ======================================================== 00:12:24.134 00:12:24.134 15:50:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:24.134 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.134 [2024-05-15 15:50:22.394662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.134 Initializing NVMe Controllers 00:12:24.134 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:24.134 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:24.134 Namespace ID: 1 size: 0GB 00:12:24.134 Initialization complete. 00:12:24.134 INFO: using host memory buffer for IO 00:12:24.134 Hello world! 00:12:24.134 [2024-05-15 15:50:22.428013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.134 15:50:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:24.134 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.394 [2024-05-15 15:50:22.704605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:25.331 Initializing NVMe Controllers 00:12:25.331 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:25.331 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:25.331 Initialization complete. Launching workers. 
00:12:25.331 submit (in ns) avg, min, max = 7800.8, 3038.4, 4002716.0 00:12:25.331 complete (in ns) avg, min, max = 19794.5, 1694.4, 7988261.6 00:12:25.331 00:12:25.331 Submit histogram 00:12:25.331 ================ 00:12:25.331 Range in us Cumulative Count 00:12:25.331 3.034 - 3.046: 0.0059% ( 1) 00:12:25.331 3.059 - 3.072: 0.0118% ( 1) 00:12:25.331 3.072 - 3.085: 0.0355% ( 4) 00:12:25.331 3.085 - 3.098: 0.1303% ( 16) 00:12:25.331 3.098 - 3.110: 0.5984% ( 79) 00:12:25.331 3.110 - 3.123: 1.6945% ( 185) 00:12:25.331 3.123 - 3.136: 3.5253% ( 309) 00:12:25.331 3.136 - 3.149: 6.2981% ( 468) 00:12:25.331 3.149 - 3.162: 9.6516% ( 566) 00:12:25.331 3.162 - 3.174: 13.5146% ( 652) 00:12:25.331 3.174 - 3.187: 17.8457% ( 731) 00:12:25.331 3.187 - 3.200: 22.8285% ( 841) 00:12:25.331 3.200 - 3.213: 28.0720% ( 885) 00:12:25.331 3.213 - 3.226: 33.7362% ( 956) 00:12:25.331 3.226 - 3.238: 39.9692% ( 1052) 00:12:25.331 3.238 - 3.251: 44.9994% ( 849) 00:12:25.331 3.251 - 3.264: 50.1185% ( 864) 00:12:25.331 3.264 - 3.277: 55.4213% ( 895) 00:12:25.331 3.277 - 3.302: 63.5976% ( 1380) 00:12:25.331 3.302 - 3.328: 71.6258% ( 1355) 00:12:25.331 3.328 - 3.354: 78.9371% ( 1234) 00:12:25.331 3.354 - 3.379: 84.6250% ( 960) 00:12:25.331 3.379 - 3.405: 87.2378% ( 441) 00:12:25.331 3.405 - 3.430: 88.1917% ( 161) 00:12:25.331 3.430 - 3.456: 88.9679% ( 131) 00:12:25.331 3.456 - 3.482: 89.8625% ( 151) 00:12:25.331 3.482 - 3.507: 91.1068% ( 210) 00:12:25.331 3.507 - 3.533: 92.7894% ( 284) 00:12:25.331 3.533 - 3.558: 94.3832% ( 269) 00:12:25.331 3.558 - 3.584: 95.8822% ( 253) 00:12:25.331 3.584 - 3.610: 97.2212% ( 226) 00:12:25.331 3.610 - 3.635: 98.1396% ( 155) 00:12:25.331 3.635 - 3.661: 98.7439% ( 102) 00:12:25.331 3.661 - 3.686: 99.1646% ( 71) 00:12:25.331 3.686 - 3.712: 99.3838% ( 37) 00:12:25.332 3.712 - 3.738: 99.5260% ( 24) 00:12:25.332 3.738 - 3.763: 99.5319% ( 1) 00:12:25.332 3.763 - 3.789: 99.5556% ( 4) 00:12:25.332 3.789 - 3.814: 99.5675% ( 2) 00:12:25.332 3.814 - 3.840: 99.5793% ( 2) 00:12:25.332 3.840 - 3.866: 99.5853% ( 1) 00:12:25.332 4.045 - 4.070: 99.5912% ( 1) 00:12:25.332 6.477 - 6.502: 99.6030% ( 2) 00:12:25.332 6.605 - 6.656: 99.6090% ( 1) 00:12:25.332 6.656 - 6.707: 99.6208% ( 2) 00:12:25.332 6.707 - 6.758: 99.6327% ( 2) 00:12:25.332 6.758 - 6.810: 99.6386% ( 1) 00:12:25.332 6.810 - 6.861: 99.6504% ( 2) 00:12:25.332 6.912 - 6.963: 99.6623% ( 2) 00:12:25.332 6.963 - 7.014: 99.6682% ( 1) 00:12:25.332 7.014 - 7.066: 99.6801% ( 2) 00:12:25.332 7.117 - 7.168: 99.6860% ( 1) 00:12:25.332 7.219 - 7.270: 99.6919% ( 1) 00:12:25.332 7.270 - 7.322: 99.6978% ( 1) 00:12:25.332 7.373 - 7.424: 99.7097% ( 2) 00:12:25.332 7.424 - 7.475: 99.7334% ( 4) 00:12:25.332 7.475 - 7.526: 99.7452% ( 2) 00:12:25.332 7.578 - 7.629: 99.7630% ( 3) 00:12:25.332 7.680 - 7.731: 99.7689% ( 1) 00:12:25.332 7.834 - 7.885: 99.7749% ( 1) 00:12:25.332 7.987 - 8.038: 99.7808% ( 1) 00:12:25.332 8.090 - 8.141: 99.7867% ( 1) 00:12:25.332 8.192 - 8.243: 99.7926% ( 1) 00:12:25.332 8.294 - 8.346: 99.8045% ( 2) 00:12:25.332 8.346 - 8.397: 99.8104% ( 1) 00:12:25.332 8.397 - 8.448: 99.8163% ( 1) 00:12:25.332 8.653 - 8.704: 99.8223% ( 1) 00:12:25.332 8.704 - 8.755: 99.8282% ( 1) 00:12:25.332 8.806 - 8.858: 99.8341% ( 1) 00:12:25.332 8.909 - 8.960: 99.8400% ( 1) 00:12:25.332 8.960 - 9.011: 99.8460% ( 1) 00:12:25.332 9.882 - 9.933: 99.8519% ( 1) 00:12:25.332 9.984 - 10.035: 99.8578% ( 1) 00:12:25.332 10.189 - 10.240: 99.8637% ( 1) 00:12:25.332 10.547 - 10.598: 99.8697% ( 1) 00:12:25.332 13.414 - 13.517: 99.8756% ( 1) 00:12:25.332 13.517 - 13.619: 99.8815% ( 
1) 00:12:25.332 16.998 - 17.101: 99.8874% ( 1) 00:12:25.332 3984.589 - 4010.803: 100.0000% ( 19) 00:12:25.332 00:12:25.332 [2024-05-15 15:50:23.720455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.332 Complete histogram 00:12:25.332 ================== 00:12:25.332 Range in us Cumulative Count 00:12:25.332 1.690 - 1.702: 0.0059% ( 1) 00:12:25.332 1.702 - 1.715: 0.0118% ( 1) 00:12:25.332 1.715 - 1.728: 0.0889% ( 13) 00:12:25.332 1.728 - 1.741: 0.8473% ( 128) 00:12:25.332 1.741 - 1.754: 1.8604% ( 171) 00:12:25.332 1.754 - 1.766: 2.5536% ( 117) 00:12:25.332 1.766 - 1.779: 13.6924% ( 1880) 00:12:25.332 1.779 - 1.792: 65.0314% ( 8665) 00:12:25.332 1.792 - 1.805: 85.8514% ( 3514) 00:12:25.332 1.805 - 1.818: 91.7704% ( 999) 00:12:25.332 1.818 - 1.830: 95.5623% ( 640) 00:12:25.332 1.830 - 1.843: 96.7354% ( 198) 00:12:25.332 1.843 - 1.856: 97.8611% ( 190) 00:12:25.332 1.856 - 1.869: 98.7854% ( 156) 00:12:25.332 1.869 - 1.882: 99.0994% ( 53) 00:12:25.332 1.882 - 1.894: 99.1942% ( 16) 00:12:25.332 1.894 - 1.907: 99.2594% ( 11) 00:12:25.332 1.907 - 1.920: 99.2949% ( 6) 00:12:25.332 1.920 - 1.933: 99.3068% ( 2) 00:12:25.332 1.933 - 1.946: 99.3186% ( 2) 00:12:25.332 1.946 - 1.958: 99.3246% ( 1) 00:12:25.332 1.971 - 1.984: 99.3305% ( 1) 00:12:25.332 1.984 - 1.997: 99.3423% ( 2) 00:12:25.332 2.061 - 2.074: 99.3483% ( 1) 00:12:25.332 2.342 - 2.355: 99.3542% ( 1) 00:12:25.332 4.659 - 4.685: 99.3601% ( 1) 00:12:25.332 4.787 - 4.813: 99.3660% ( 1) 00:12:25.332 5.146 - 5.171: 99.3720% ( 1) 00:12:25.332 5.299 - 5.325: 99.3779% ( 1) 00:12:25.332 5.376 - 5.402: 99.3897% ( 2) 00:12:25.332 5.581 - 5.606: 99.3957% ( 1) 00:12:25.332 5.606 - 5.632: 99.4016% ( 1) 00:12:25.332 5.760 - 5.786: 99.4075% ( 1) 00:12:25.332 5.862 - 5.888: 99.4134% ( 1) 00:12:25.332 5.888 - 5.914: 99.4253% ( 2) 00:12:25.332 6.016 - 6.042: 99.4312% ( 1) 00:12:25.332 6.042 - 6.067: 99.4371% ( 1) 00:12:25.332 6.426 - 6.451: 99.4431% ( 1) 00:12:25.332 6.451 - 6.477: 99.4490% ( 1) 00:12:25.332 6.528 - 6.554: 99.4549% ( 1) 00:12:25.332 6.605 - 6.656: 99.4608% ( 1) 00:12:25.332 6.656 - 6.707: 99.4668% ( 1) 00:12:25.332 6.707 - 6.758: 99.4786% ( 2) 00:12:25.332 6.912 - 6.963: 99.4845% ( 1) 00:12:25.332 7.168 - 7.219: 99.4905% ( 1) 00:12:25.332 7.270 - 7.322: 99.4964% ( 1) 00:12:25.332 7.322 - 7.373: 99.5023% ( 1) 00:12:25.332 7.475 - 7.526: 99.5082% ( 1) 00:12:25.332 7.885 - 7.936: 99.5142% ( 1) 00:12:25.332 8.038 - 8.090: 99.5260% ( 2) 00:12:25.332 8.294 - 8.346: 99.5319% ( 1) 00:12:25.332 8.448 - 8.499: 99.5379% ( 1) 00:12:25.332 8.602 - 8.653: 99.5438% ( 1) 00:12:25.332 10.598 - 10.650: 99.5497% ( 1) 00:12:25.332 15.667 - 15.770: 99.5556% ( 1) 00:12:25.332 3984.589 - 4010.803: 99.9941% ( 74) 00:12:25.332 7969.178 - 8021.606: 100.0000% ( 1) 00:12:25.332 00:12:25.332 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:25.332 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:25.332 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:25.332 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:25.332 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:25.593 [ 00:12:25.593 { 00:12:25.593 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:25.593 "subtype": "Discovery", 00:12:25.593 "listen_addresses": [], 00:12:25.593 "allow_any_host": true, 00:12:25.593 "hosts": [] 00:12:25.593 }, 00:12:25.593 { 00:12:25.593 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:25.593 "subtype": "NVMe", 00:12:25.593 "listen_addresses": [ 00:12:25.593 { 00:12:25.593 "trtype": "VFIOUSER", 00:12:25.593 "adrfam": "IPv4", 00:12:25.593 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:25.593 "trsvcid": "0" 00:12:25.593 } 00:12:25.593 ], 00:12:25.593 "allow_any_host": true, 00:12:25.593 "hosts": [], 00:12:25.593 "serial_number": "SPDK1", 00:12:25.593 "model_number": "SPDK bdev Controller", 00:12:25.593 "max_namespaces": 32, 00:12:25.593 "min_cntlid": 1, 00:12:25.593 "max_cntlid": 65519, 00:12:25.593 "namespaces": [ 00:12:25.593 { 00:12:25.593 "nsid": 1, 00:12:25.593 "bdev_name": "Malloc1", 00:12:25.593 "name": "Malloc1", 00:12:25.593 "nguid": "8B60481DBE794917AA0628DC03A7DB61", 00:12:25.593 "uuid": "8b60481d-be79-4917-aa06-28dc03a7db61" 00:12:25.593 } 00:12:25.593 ] 00:12:25.593 }, 00:12:25.593 { 00:12:25.593 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:25.593 "subtype": "NVMe", 00:12:25.593 "listen_addresses": [ 00:12:25.593 { 00:12:25.593 "trtype": "VFIOUSER", 00:12:25.593 "adrfam": "IPv4", 00:12:25.593 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:25.593 "trsvcid": "0" 00:12:25.593 } 00:12:25.593 ], 00:12:25.593 "allow_any_host": true, 00:12:25.593 "hosts": [], 00:12:25.593 "serial_number": "SPDK2", 00:12:25.593 "model_number": "SPDK bdev Controller", 00:12:25.593 "max_namespaces": 32, 00:12:25.593 "min_cntlid": 1, 00:12:25.593 "max_cntlid": 65519, 00:12:25.593 "namespaces": [ 00:12:25.593 { 00:12:25.593 "nsid": 1, 00:12:25.593 "bdev_name": "Malloc2", 00:12:25.593 "name": "Malloc2", 00:12:25.593 "nguid": "A8BDEB9620B54E928A5B1B490C6F796B", 00:12:25.593 "uuid": "a8bdeb96-20b5-4e92-8a5b-1b490c6f796b" 00:12:25.593 } 00:12:25.593 ] 00:12:25.593 } 00:12:25.593 ] 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3678625 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:25.593 15:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:25.593 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.593 [2024-05-15 15:50:24.109558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:25.593 Malloc3 00:12:25.593 15:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:25.886 [2024-05-15 15:50:24.283841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.886 15:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:25.886 Asynchronous Event Request test 00:12:25.886 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:25.886 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:25.886 Registering asynchronous event callbacks... 00:12:25.886 Starting namespace attribute notice tests for all controllers... 00:12:25.886 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:25.886 aer_cb - Changed Namespace 00:12:25.886 Cleaning up... 00:12:26.147 [ 00:12:26.147 { 00:12:26.147 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:26.147 "subtype": "Discovery", 00:12:26.147 "listen_addresses": [], 00:12:26.147 "allow_any_host": true, 00:12:26.147 "hosts": [] 00:12:26.147 }, 00:12:26.147 { 00:12:26.147 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:26.147 "subtype": "NVMe", 00:12:26.147 "listen_addresses": [ 00:12:26.147 { 00:12:26.147 "trtype": "VFIOUSER", 00:12:26.147 "adrfam": "IPv4", 00:12:26.147 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:26.147 "trsvcid": "0" 00:12:26.147 } 00:12:26.147 ], 00:12:26.147 "allow_any_host": true, 00:12:26.147 "hosts": [], 00:12:26.147 "serial_number": "SPDK1", 00:12:26.147 "model_number": "SPDK bdev Controller", 00:12:26.147 "max_namespaces": 32, 00:12:26.147 "min_cntlid": 1, 00:12:26.147 "max_cntlid": 65519, 00:12:26.147 "namespaces": [ 00:12:26.147 { 00:12:26.147 "nsid": 1, 00:12:26.147 "bdev_name": "Malloc1", 00:12:26.147 "name": "Malloc1", 00:12:26.147 "nguid": "8B60481DBE794917AA0628DC03A7DB61", 00:12:26.147 "uuid": "8b60481d-be79-4917-aa06-28dc03a7db61" 00:12:26.147 }, 00:12:26.147 { 00:12:26.147 "nsid": 2, 00:12:26.147 "bdev_name": "Malloc3", 00:12:26.147 "name": "Malloc3", 00:12:26.147 "nguid": "AB9A42A4576C4A38940F9C30179F12F0", 00:12:26.147 "uuid": "ab9a42a4-576c-4a38-940f-9c30179f12f0" 00:12:26.147 } 00:12:26.147 ] 00:12:26.147 }, 00:12:26.147 { 00:12:26.147 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:26.147 "subtype": "NVMe", 00:12:26.147 "listen_addresses": [ 00:12:26.147 { 00:12:26.147 "trtype": "VFIOUSER", 00:12:26.147 "adrfam": "IPv4", 00:12:26.147 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:26.147 "trsvcid": "0" 00:12:26.147 } 00:12:26.147 ], 00:12:26.147 "allow_any_host": true, 00:12:26.147 "hosts": [], 00:12:26.147 "serial_number": "SPDK2", 00:12:26.147 "model_number": "SPDK bdev Controller", 00:12:26.147 
"max_namespaces": 32, 00:12:26.147 "min_cntlid": 1, 00:12:26.147 "max_cntlid": 65519, 00:12:26.147 "namespaces": [ 00:12:26.147 { 00:12:26.147 "nsid": 1, 00:12:26.147 "bdev_name": "Malloc2", 00:12:26.147 "name": "Malloc2", 00:12:26.147 "nguid": "A8BDEB9620B54E928A5B1B490C6F796B", 00:12:26.147 "uuid": "a8bdeb96-20b5-4e92-8a5b-1b490c6f796b" 00:12:26.147 } 00:12:26.147 ] 00:12:26.147 } 00:12:26.147 ] 00:12:26.147 15:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3678625 00:12:26.147 15:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:26.147 15:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:26.147 15:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:26.147 15:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:26.147 [2024-05-15 15:50:24.504611] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:12:26.147 [2024-05-15 15:50:24.504639] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678643 ] 00:12:26.147 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.147 [2024-05-15 15:50:24.534428] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:26.147 [2024-05-15 15:50:24.546267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:26.147 [2024-05-15 15:50:24.546290] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f721ed95000 00:12:26.147 [2024-05-15 15:50:24.547271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:26.147 [2024-05-15 15:50:24.548277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:26.147 [2024-05-15 15:50:24.549280] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:26.147 [2024-05-15 15:50:24.550286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:26.147 [2024-05-15 15:50:24.551290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:26.147 [2024-05-15 15:50:24.552301] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:26.147 [2024-05-15 15:50:24.553304] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:26.147 [2024-05-15 15:50:24.554315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:26.147 [2024-05-15 15:50:24.555324] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:26.147 [2024-05-15 15:50:24.555342] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f721ed8a000 00:12:26.147 [2024-05-15 15:50:24.556235] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:26.147 [2024-05-15 15:50:24.565442] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:26.147 [2024-05-15 15:50:24.565467] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:26.147 [2024-05-15 15:50:24.570575] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:26.147 [2024-05-15 15:50:24.570617] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:26.147 [2024-05-15 15:50:24.570690] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:26.147 [2024-05-15 15:50:24.570706] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:26.147 [2024-05-15 15:50:24.570713] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:26.147 [2024-05-15 15:50:24.571580] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:26.147 [2024-05-15 15:50:24.571591] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:26.147 [2024-05-15 15:50:24.571600] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:26.147 [2024-05-15 15:50:24.572590] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:26.147 [2024-05-15 15:50:24.572600] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:26.147 [2024-05-15 15:50:24.572609] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:26.147 [2024-05-15 15:50:24.573599] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:26.147 [2024-05-15 15:50:24.573611] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:26.147 [2024-05-15 15:50:24.574603] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:26.147 [2024-05-15 15:50:24.574614] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:26.147 [2024-05-15 15:50:24.574621] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:26.147 [2024-05-15 15:50:24.574630] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:26.147 [2024-05-15 15:50:24.574737] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:26.147 [2024-05-15 15:50:24.574743] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:26.147 [2024-05-15 15:50:24.574750] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:26.147 [2024-05-15 15:50:24.575612] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:26.147 [2024-05-15 15:50:24.576611] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:26.147 [2024-05-15 15:50:24.577617] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:26.147 [2024-05-15 15:50:24.578622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:26.147 [2024-05-15 15:50:24.578664] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:26.147 [2024-05-15 15:50:24.579634] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:26.147 [2024-05-15 15:50:24.579645] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:26.147 [2024-05-15 15:50:24.579651] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:26.147 [2024-05-15 15:50:24.579670] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:26.147 [2024-05-15 15:50:24.579679] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:26.147 [2024-05-15 15:50:24.579695] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:26.148 [2024-05-15 15:50:24.579701] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:26.148 [2024-05-15 15:50:24.579715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.587204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.587219] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:26.148 [2024-05-15 15:50:24.587226] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:26.148 [2024-05-15 15:50:24.587231] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:26.148 [2024-05-15 15:50:24.587237] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:26.148 [2024-05-15 15:50:24.587244] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:26.148 [2024-05-15 15:50:24.587250] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:26.148 [2024-05-15 15:50:24.587256] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.587268] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.587281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.595199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.595216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.148 [2024-05-15 15:50:24.595226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.148 [2024-05-15 15:50:24.595237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.148 [2024-05-15 15:50:24.595246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.148 [2024-05-15 15:50:24.595252] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.595260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.595270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.603199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.603209] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:26.148 [2024-05-15 15:50:24.603219] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.603227] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.603234] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.603243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.611198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.611244] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.611255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.611263] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:26.148 [2024-05-15 15:50:24.611269] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:26.148 [2024-05-15 15:50:24.611276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.619198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.619213] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:26.148 [2024-05-15 15:50:24.619223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.619232] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.619240] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:26.148 [2024-05-15 15:50:24.619246] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:26.148 [2024-05-15 15:50:24.619253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.627198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.627211] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.627225] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.627233] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:26.148 [2024-05-15 15:50:24.627239] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:26.148 [2024-05-15 15:50:24.627246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.635198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.635213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.635221] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.635230] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.635237] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.635243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.635250] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:26.148 [2024-05-15 15:50:24.635255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:26.148 [2024-05-15 15:50:24.635262] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:26.148 [2024-05-15 15:50:24.635282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.643200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.643216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.651200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.651215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.659199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.659213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.667199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.667215] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:26.148 [2024-05-15 15:50:24.667221] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:26.148 [2024-05-15 15:50:24.667226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:26.148 [2024-05-15 15:50:24.667230] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:26.148 [2024-05-15 15:50:24.667237] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:26.148 [2024-05-15 15:50:24.667248] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:26.148 [2024-05-15 15:50:24.667254] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:26.148 [2024-05-15 15:50:24.667261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.667268] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:26.148 [2024-05-15 15:50:24.667274] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:26.148 [2024-05-15 15:50:24.667281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.667292] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:26.148 [2024-05-15 15:50:24.667298] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:26.148 [2024-05-15 15:50:24.667304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:26.148 [2024-05-15 15:50:24.675202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.675219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.675230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:26.148 [2024-05-15 15:50:24.675241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:26.148 ===================================================== 00:12:26.148 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:26.148 ===================================================== 00:12:26.148 Controller Capabilities/Features 00:12:26.148 ================================ 00:12:26.148 Vendor ID: 4e58 00:12:26.148 Subsystem Vendor ID: 4e58 00:12:26.148 Serial Number: SPDK2 00:12:26.148 Model Number: SPDK bdev Controller 00:12:26.148 Firmware Version: 24.05 00:12:26.148 Recommended Arb Burst: 6 00:12:26.148 IEEE OUI Identifier: 8d 6b 50 00:12:26.148 Multi-path I/O 00:12:26.148 May have multiple subsystem ports: Yes 00:12:26.148 May have multiple controllers: Yes 00:12:26.148 Associated with SR-IOV VF: No 00:12:26.148 Max Data Transfer Size: 131072 00:12:26.148 Max Number of Namespaces: 32 00:12:26.148 Max Number of I/O Queues: 127 00:12:26.148 NVMe Specification Version (VS): 1.3 00:12:26.148 NVMe Specification Version (Identify): 1.3 00:12:26.148 Maximum Queue Entries: 256 00:12:26.148 Contiguous Queues Required: Yes 00:12:26.148 Arbitration Mechanisms Supported 00:12:26.148 Weighted Round Robin: Not Supported 00:12:26.148 Vendor Specific: Not Supported 00:12:26.148 Reset Timeout: 15000 ms 00:12:26.148 Doorbell Stride: 4 bytes 
00:12:26.148 NVM Subsystem Reset: Not Supported 00:12:26.148 Command Sets Supported 00:12:26.148 NVM Command Set: Supported 00:12:26.148 Boot Partition: Not Supported 00:12:26.148 Memory Page Size Minimum: 4096 bytes 00:12:26.148 Memory Page Size Maximum: 4096 bytes 00:12:26.148 Persistent Memory Region: Not Supported 00:12:26.148 Optional Asynchronous Events Supported 00:12:26.148 Namespace Attribute Notices: Supported 00:12:26.148 Firmware Activation Notices: Not Supported 00:12:26.148 ANA Change Notices: Not Supported 00:12:26.148 PLE Aggregate Log Change Notices: Not Supported 00:12:26.148 LBA Status Info Alert Notices: Not Supported 00:12:26.148 EGE Aggregate Log Change Notices: Not Supported 00:12:26.148 Normal NVM Subsystem Shutdown event: Not Supported 00:12:26.148 Zone Descriptor Change Notices: Not Supported 00:12:26.148 Discovery Log Change Notices: Not Supported 00:12:26.148 Controller Attributes 00:12:26.148 128-bit Host Identifier: Supported 00:12:26.148 Non-Operational Permissive Mode: Not Supported 00:12:26.148 NVM Sets: Not Supported 00:12:26.148 Read Recovery Levels: Not Supported 00:12:26.148 Endurance Groups: Not Supported 00:12:26.148 Predictable Latency Mode: Not Supported 00:12:26.148 Traffic Based Keep ALive: Not Supported 00:12:26.148 Namespace Granularity: Not Supported 00:12:26.148 SQ Associations: Not Supported 00:12:26.148 UUID List: Not Supported 00:12:26.148 Multi-Domain Subsystem: Not Supported 00:12:26.148 Fixed Capacity Management: Not Supported 00:12:26.148 Variable Capacity Management: Not Supported 00:12:26.148 Delete Endurance Group: Not Supported 00:12:26.148 Delete NVM Set: Not Supported 00:12:26.148 Extended LBA Formats Supported: Not Supported 00:12:26.148 Flexible Data Placement Supported: Not Supported 00:12:26.148 00:12:26.148 Controller Memory Buffer Support 00:12:26.148 ================================ 00:12:26.148 Supported: No 00:12:26.148 00:12:26.148 Persistent Memory Region Support 00:12:26.148 ================================ 00:12:26.148 Supported: No 00:12:26.148 00:12:26.148 Admin Command Set Attributes 00:12:26.148 ============================ 00:12:26.148 Security Send/Receive: Not Supported 00:12:26.148 Format NVM: Not Supported 00:12:26.148 Firmware Activate/Download: Not Supported 00:12:26.148 Namespace Management: Not Supported 00:12:26.148 Device Self-Test: Not Supported 00:12:26.148 Directives: Not Supported 00:12:26.148 NVMe-MI: Not Supported 00:12:26.148 Virtualization Management: Not Supported 00:12:26.148 Doorbell Buffer Config: Not Supported 00:12:26.148 Get LBA Status Capability: Not Supported 00:12:26.148 Command & Feature Lockdown Capability: Not Supported 00:12:26.148 Abort Command Limit: 4 00:12:26.148 Async Event Request Limit: 4 00:12:26.148 Number of Firmware Slots: N/A 00:12:26.148 Firmware Slot 1 Read-Only: N/A 00:12:26.148 Firmware Activation Without Reset: N/A 00:12:26.148 Multiple Update Detection Support: N/A 00:12:26.148 Firmware Update Granularity: No Information Provided 00:12:26.148 Per-Namespace SMART Log: No 00:12:26.148 Asymmetric Namespace Access Log Page: Not Supported 00:12:26.148 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:26.148 Command Effects Log Page: Supported 00:12:26.148 Get Log Page Extended Data: Supported 00:12:26.148 Telemetry Log Pages: Not Supported 00:12:26.148 Persistent Event Log Pages: Not Supported 00:12:26.148 Supported Log Pages Log Page: May Support 00:12:26.149 Commands Supported & Effects Log Page: Not Supported 00:12:26.149 Feature Identifiers & Effects Log Page:May 
Support 00:12:26.149 NVMe-MI Commands & Effects Log Page: May Support 00:12:26.149 Data Area 4 for Telemetry Log: Not Supported 00:12:26.149 Error Log Page Entries Supported: 128 00:12:26.149 Keep Alive: Supported 00:12:26.149 Keep Alive Granularity: 10000 ms 00:12:26.149 00:12:26.149 NVM Command Set Attributes 00:12:26.149 ========================== 00:12:26.149 Submission Queue Entry Size 00:12:26.149 Max: 64 00:12:26.149 Min: 64 00:12:26.149 Completion Queue Entry Size 00:12:26.149 Max: 16 00:12:26.149 Min: 16 00:12:26.149 Number of Namespaces: 32 00:12:26.149 Compare Command: Supported 00:12:26.149 Write Uncorrectable Command: Not Supported 00:12:26.149 Dataset Management Command: Supported 00:12:26.149 Write Zeroes Command: Supported 00:12:26.149 Set Features Save Field: Not Supported 00:12:26.149 Reservations: Not Supported 00:12:26.149 Timestamp: Not Supported 00:12:26.149 Copy: Supported 00:12:26.149 Volatile Write Cache: Present 00:12:26.149 Atomic Write Unit (Normal): 1 00:12:26.149 Atomic Write Unit (PFail): 1 00:12:26.149 Atomic Compare & Write Unit: 1 00:12:26.149 Fused Compare & Write: Supported 00:12:26.149 Scatter-Gather List 00:12:26.149 SGL Command Set: Supported (Dword aligned) 00:12:26.149 SGL Keyed: Not Supported 00:12:26.149 SGL Bit Bucket Descriptor: Not Supported 00:12:26.149 SGL Metadata Pointer: Not Supported 00:12:26.149 Oversized SGL: Not Supported 00:12:26.149 SGL Metadata Address: Not Supported 00:12:26.149 SGL Offset: Not Supported 00:12:26.149 Transport SGL Data Block: Not Supported 00:12:26.149 Replay Protected Memory Block: Not Supported 00:12:26.149 00:12:26.149 Firmware Slot Information 00:12:26.149 ========================= 00:12:26.149 Active slot: 1 00:12:26.149 Slot 1 Firmware Revision: 24.05 00:12:26.149 00:12:26.149 00:12:26.149 Commands Supported and Effects 00:12:26.149 ============================== 00:12:26.149 Admin Commands 00:12:26.149 -------------- 00:12:26.149 Get Log Page (02h): Supported 00:12:26.149 Identify (06h): Supported 00:12:26.149 Abort (08h): Supported 00:12:26.149 Set Features (09h): Supported 00:12:26.149 Get Features (0Ah): Supported 00:12:26.149 Asynchronous Event Request (0Ch): Supported 00:12:26.149 Keep Alive (18h): Supported 00:12:26.149 I/O Commands 00:12:26.149 ------------ 00:12:26.149 Flush (00h): Supported LBA-Change 00:12:26.149 Write (01h): Supported LBA-Change 00:12:26.149 Read (02h): Supported 00:12:26.149 Compare (05h): Supported 00:12:26.149 Write Zeroes (08h): Supported LBA-Change 00:12:26.149 Dataset Management (09h): Supported LBA-Change 00:12:26.149 Copy (19h): Supported LBA-Change 00:12:26.149 Unknown (79h): Supported LBA-Change 00:12:26.149 Unknown (7Ah): Supported 00:12:26.149 00:12:26.149 Error Log 00:12:26.149 ========= 00:12:26.149 00:12:26.149 Arbitration 00:12:26.149 =========== 00:12:26.149 Arbitration Burst: 1 00:12:26.149 00:12:26.149 Power Management 00:12:26.149 ================ 00:12:26.149 Number of Power States: 1 00:12:26.149 Current Power State: Power State #0 00:12:26.149 Power State #0: 00:12:26.149 Max Power: 0.00 W 00:12:26.149 Non-Operational State: Operational 00:12:26.149 Entry Latency: Not Reported 00:12:26.149 Exit Latency: Not Reported 00:12:26.149 Relative Read Throughput: 0 00:12:26.149 Relative Read Latency: 0 00:12:26.149 Relative Write Throughput: 0 00:12:26.149 Relative Write Latency: 0 00:12:26.149 Idle Power: Not Reported 00:12:26.149 Active Power: Not Reported 00:12:26.149 Non-Operational Permissive Mode: Not Supported 00:12:26.149 00:12:26.149 Health Information 
00:12:26.149 ================== 00:12:26.149 Critical Warnings: 00:12:26.149 Available Spare Space: OK 00:12:26.149 Temperature: OK 00:12:26.149 Device Reliability: OK 00:12:26.149 Read Only: No 00:12:26.149 Volatile Memory Backup: OK 00:12:26.149 [2024-05-15 15:50:24.675334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:26.149 [2024-05-15 15:50:24.683198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:26.149 [2024-05-15 15:50:24.683228] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:26.149 [2024-05-15 15:50:24.683238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.149 [2024-05-15 15:50:24.683246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.149 [2024-05-15 15:50:24.683254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.149 [2024-05-15 15:50:24.683262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.149 [2024-05-15 15:50:24.683314] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:26.149 [2024-05-15 15:50:24.683327] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:26.149 [2024-05-15 15:50:24.684318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:26.149 [2024-05-15 15:50:24.684364] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:26.149 [2024-05-15 15:50:24.684372] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:26.149 [2024-05-15 15:50:24.685323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:26.149 [2024-05-15 15:50:24.685337] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:26.149 [2024-05-15 15:50:24.685389] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:26.149 [2024-05-15 15:50:24.686342] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:26.408 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:26.408 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:26.408 Available Spare: 0% 00:12:26.408 Available Spare Threshold: 0% 00:12:26.408 Life Percentage Used: 0% 00:12:26.408 Data Units Read: 0 00:12:26.408 Data Units Written: 0 00:12:26.408 Host Read Commands: 0 00:12:26.408 Host Write Commands: 0 00:12:26.408 Controller Busy Time: 0 minutes 00:12:26.409 Power Cycles: 0 00:12:26.409 Power On Hours: 0 hours 00:12:26.409 Unsafe Shutdowns: 0 00:12:26.409 Unrecoverable Media Errors: 0 00:12:26.409 Lifetime Error Log Entries: 0 00:12:26.409 Warning Temperature Time: 0
minutes 00:12:26.409 Critical Temperature Time: 0 minutes 00:12:26.409 00:12:26.409 Number of Queues 00:12:26.409 ================ 00:12:26.409 Number of I/O Submission Queues: 127 00:12:26.409 Number of I/O Completion Queues: 127 00:12:26.409 00:12:26.409 Active Namespaces 00:12:26.409 ================= 00:12:26.409 Namespace ID:1 00:12:26.409 Error Recovery Timeout: Unlimited 00:12:26.409 Command Set Identifier: NVM (00h) 00:12:26.409 Deallocate: Supported 00:12:26.409 Deallocated/Unwritten Error: Not Supported 00:12:26.409 Deallocated Read Value: Unknown 00:12:26.409 Deallocate in Write Zeroes: Not Supported 00:12:26.409 Deallocated Guard Field: 0xFFFF 00:12:26.409 Flush: Supported 00:12:26.409 Reservation: Supported 00:12:26.409 Namespace Sharing Capabilities: Multiple Controllers 00:12:26.409 Size (in LBAs): 131072 (0GiB) 00:12:26.409 Capacity (in LBAs): 131072 (0GiB) 00:12:26.409 Utilization (in LBAs): 131072 (0GiB) 00:12:26.409 NGUID: A8BDEB9620B54E928A5B1B490C6F796B 00:12:26.409 UUID: a8bdeb96-20b5-4e92-8a5b-1b490c6f796b 00:12:26.409 Thin Provisioning: Not Supported 00:12:26.409 Per-NS Atomic Units: Yes 00:12:26.409 Atomic Boundary Size (Normal): 0 00:12:26.409 Atomic Boundary Size (PFail): 0 00:12:26.409 Atomic Boundary Offset: 0 00:12:26.409 Maximum Single Source Range Length: 65535 00:12:26.409 Maximum Copy Length: 65535 00:12:26.409 Maximum Source Range Count: 1 00:12:26.409 NGUID/EUI64 Never Reused: No 00:12:26.409 Namespace Write Protected: No 00:12:26.409 Number of LBA Formats: 1 00:12:26.409 Current LBA Format: LBA Format #00 00:12:26.409 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:26.409 00:12:26.409 15:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:26.409 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.409 [2024-05-15 15:50:24.904202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:31.683 Initializing NVMe Controllers 00:12:31.683 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:31.683 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:31.683 Initialization complete. Launching workers. 
00:12:31.683 ======================================================== 00:12:31.683 Latency(us) 00:12:31.683 Device Information : IOPS MiB/s Average min max 00:12:31.683 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39986.20 156.20 3200.92 915.95 6703.94 00:12:31.683 ======================================================== 00:12:31.683 Total : 39986.20 156.20 3200.92 915.95 6703.94 00:12:31.683 00:12:31.683 [2024-05-15 15:50:30.013441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:31.683 15:50:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:31.683 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.683 [2024-05-15 15:50:30.233107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:36.958 Initializing NVMe Controllers 00:12:36.958 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:36.958 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:36.958 Initialization complete. Launching workers. 00:12:36.958 ======================================================== 00:12:36.958 Latency(us) 00:12:36.958 Device Information : IOPS MiB/s Average min max 00:12:36.958 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.43 156.08 3203.23 920.83 7674.70 00:12:36.958 ======================================================== 00:12:36.958 Total : 39957.43 156.08 3203.23 920.83 7674.70 00:12:36.958 00:12:36.958 [2024-05-15 15:50:35.253651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:36.958 15:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:36.958 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.958 [2024-05-15 15:50:35.464732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:42.228 [2024-05-15 15:50:40.609288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:42.228 Initializing NVMe Controllers 00:12:42.228 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:42.228 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:42.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:42.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:42.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:42.228 Initialization complete. Launching workers. 
00:12:42.228 Starting thread on core 2 00:12:42.228 Starting thread on core 3 00:12:42.228 Starting thread on core 1 00:12:42.228 15:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:42.228 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.487 [2024-05-15 15:50:40.905566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.775 [2024-05-15 15:50:43.958253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.775 Initializing NVMe Controllers 00:12:45.775 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.775 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.775 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:45.775 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:45.775 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:45.775 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:45.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:45.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:45.775 Initialization complete. Launching workers. 00:12:45.775 Starting thread on core 1 with urgent priority queue 00:12:45.775 Starting thread on core 2 with urgent priority queue 00:12:45.775 Starting thread on core 3 with urgent priority queue 00:12:45.775 Starting thread on core 0 with urgent priority queue 00:12:45.775 SPDK bdev Controller (SPDK2 ) core 0: 9422.00 IO/s 10.61 secs/100000 ios 00:12:45.775 SPDK bdev Controller (SPDK2 ) core 1: 9255.67 IO/s 10.80 secs/100000 ios 00:12:45.775 SPDK bdev Controller (SPDK2 ) core 2: 7706.00 IO/s 12.98 secs/100000 ios 00:12:45.775 SPDK bdev Controller (SPDK2 ) core 3: 9853.67 IO/s 10.15 secs/100000 ios 00:12:45.775 ======================================================== 00:12:45.775 00:12:45.775 15:50:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:45.775 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.775 [2024-05-15 15:50:44.246609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.775 Initializing NVMe Controllers 00:12:45.775 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.775 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.775 Namespace ID: 1 size: 0GB 00:12:45.775 Initialization complete. 00:12:45.775 INFO: using host memory buffer for IO 00:12:45.775 Hello world! 
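
For reference, every example run above (perf, reconnect, arbitration, hello_world) addresses the target through an NVMe transport ID string rather than a PCI address. A minimal sketch of reproducing the 4 KiB read benchmark by hand, assuming the SPDK build tree used throughout this log and a target still listening on the vfio-user socket directory shown above (root assumed for hugepage/VFIO access):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -q 128 outstanding I/Os, -o 4096-byte reads, -w read workload, -t 5 seconds, -c 0x2 pins the worker to core 1
    sudo ./build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The MiB/s column in the result tables follows directly from the IOPS at the fixed 4096-byte I/O size, e.g. 39986.20 IOPS x 4096 B = 156.20 MiB/s for the read run.
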
00:12:45.775 [2024-05-15 15:50:44.258696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.775 15:50:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:46.035 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.035 [2024-05-15 15:50:44.544432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:47.414 Initializing NVMe Controllers 00:12:47.414 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:47.414 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:47.414 Initialization complete. Launching workers. 00:12:47.414 submit (in ns) avg, min, max = 6793.8, 3080.0, 4001188.0 00:12:47.414 complete (in ns) avg, min, max = 20985.5, 1712.0, 3999524.0 00:12:47.414 00:12:47.414 Submit histogram 00:12:47.414 ================ 00:12:47.414 Range in us Cumulative Count 00:12:47.414 3.072 - 3.085: 0.0176% ( 3) 00:12:47.414 3.085 - 3.098: 0.2695% ( 43) 00:12:47.414 3.098 - 3.110: 1.1250% ( 146) 00:12:47.414 3.110 - 3.123: 2.6544% ( 261) 00:12:47.414 3.123 - 3.136: 5.5256% ( 490) 00:12:47.414 3.136 - 3.149: 9.2113% ( 629) 00:12:47.414 3.149 - 3.162: 14.2154% ( 854) 00:12:47.414 3.162 - 3.174: 20.4324% ( 1061) 00:12:47.414 3.174 - 3.187: 26.2159% ( 987) 00:12:47.414 3.187 - 3.200: 32.4681% ( 1067) 00:12:47.414 3.200 - 3.213: 38.4449% ( 1020) 00:12:47.414 3.213 - 3.226: 45.0193% ( 1122) 00:12:47.414 3.226 - 3.238: 51.4180% ( 1092) 00:12:47.414 3.238 - 3.251: 55.6545% ( 723) 00:12:47.414 3.251 - 3.264: 58.6019% ( 503) 00:12:47.414 3.264 - 3.277: 61.7485% ( 537) 00:12:47.414 3.277 - 3.302: 67.2976% ( 947) 00:12:47.414 3.302 - 3.328: 72.7997% ( 939) 00:12:47.414 3.328 - 3.354: 80.3410% ( 1287) 00:12:47.414 3.354 - 3.379: 86.1069% ( 984) 00:12:47.414 3.379 - 3.405: 87.8648% ( 300) 00:12:47.414 3.405 - 3.430: 88.5210% ( 112) 00:12:47.414 3.430 - 3.456: 89.3179% ( 136) 00:12:47.414 3.456 - 3.482: 90.6832% ( 233) 00:12:47.414 3.482 - 3.507: 92.3356% ( 282) 00:12:47.414 3.507 - 3.533: 94.0408% ( 291) 00:12:47.414 3.533 - 3.558: 95.4178% ( 235) 00:12:47.414 3.558 - 3.584: 96.4608% ( 178) 00:12:47.414 3.584 - 3.610: 97.5800% ( 191) 00:12:47.414 3.610 - 3.635: 98.4472% ( 148) 00:12:47.414 3.635 - 3.661: 99.0390% ( 101) 00:12:47.414 3.661 - 3.686: 99.3203% ( 48) 00:12:47.414 3.686 - 3.712: 99.5019% ( 31) 00:12:47.414 3.712 - 3.738: 99.5898% ( 15) 00:12:47.414 3.738 - 3.763: 99.6308% ( 7) 00:12:47.414 3.763 - 3.789: 99.6367% ( 1) 00:12:47.414 3.789 - 3.814: 99.6484% ( 2) 00:12:47.414 3.814 - 3.840: 99.6543% ( 1) 00:12:47.414 3.840 - 3.866: 99.6601% ( 1) 00:12:47.414 3.891 - 3.917: 99.6719% ( 2) 00:12:47.414 3.917 - 3.942: 99.6777% ( 1) 00:12:47.414 5.478 - 5.504: 99.6836% ( 1) 00:12:47.414 5.530 - 5.555: 99.6894% ( 1) 00:12:47.414 5.606 - 5.632: 99.6953% ( 1) 00:12:47.414 5.709 - 5.734: 99.7070% ( 2) 00:12:47.414 5.786 - 5.811: 99.7129% ( 1) 00:12:47.414 5.862 - 5.888: 99.7187% ( 1) 00:12:47.414 5.888 - 5.914: 99.7246% ( 1) 00:12:47.414 6.426 - 6.451: 99.7305% ( 1) 00:12:47.414 6.707 - 6.758: 99.7363% ( 1) 00:12:47.414 6.810 - 6.861: 99.7422% ( 1) 00:12:47.414 6.861 - 6.912: 99.7480% ( 1) 00:12:47.414 7.014 - 7.066: 99.7598% ( 2) 00:12:47.414 7.117 - 7.168: 99.7656% ( 1) 00:12:47.414 7.168 - 7.219: 99.7773% ( 2) 00:12:47.414 7.219 - 7.270: 99.7949% ( 3) 00:12:47.414 
7.270 - 7.322: 99.8008% ( 1) 00:12:47.414 7.322 - 7.373: 99.8066% ( 1) 00:12:47.414 7.373 - 7.424: 99.8125% ( 1) 00:12:47.414 7.424 - 7.475: 99.8184% ( 1) 00:12:47.414 7.475 - 7.526: 99.8242% ( 1) 00:12:47.414 7.578 - 7.629: 99.8301% ( 1) 00:12:47.414 7.680 - 7.731: 99.8359% ( 1) 00:12:47.414 7.731 - 7.782: 99.8418% ( 1) 00:12:47.414 7.782 - 7.834: 99.8477% ( 1) 00:12:47.414 7.834 - 7.885: 99.8594% ( 2) 00:12:47.414 7.885 - 7.936: 99.8652% ( 1) 00:12:47.414 8.038 - 8.090: 99.8711% ( 1) 00:12:47.414 8.141 - 8.192: 99.8769% ( 1) 00:12:47.414 8.346 - 8.397: 99.8828% ( 1) 00:12:47.414 8.397 - 8.448: 99.8887% ( 1) 00:12:47.414 8.550 - 8.602: 99.8945% ( 1) 00:12:47.415 8.960 - 9.011: 99.9004% ( 1) 00:12:47.415 10.394 - 10.445: 99.9062% ( 1) 00:12:47.415 15.258 - 15.360: 99.9121% ( 1) 00:12:47.415 3984.589 - 4010.803: 100.0000% ( 15) 00:12:47.415 00:12:47.415 Complete histogram 00:12:47.415 ================== 00:12:47.415 Range in us Cumulative Count 00:12:47.415 1.702 - 1.715: 0.0176% ( 3) 00:12:47.415 [2024-05-15 15:50:45.636031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:47.415 1.715 - 1.728: 0.3750% ( 61) 00:12:47.415 1.728 - 1.741: 1.6876% ( 224) 00:12:47.415 1.741 - 1.754: 2.5548% ( 148) 00:12:47.415 1.754 - 1.766: 4.2072% ( 282) 00:12:47.415 1.766 - 1.779: 37.0503% ( 5605) 00:12:47.415 1.779 - 1.792: 79.7023% ( 7279) 00:12:47.415 1.792 - 1.805: 90.2320% ( 1797) 00:12:47.415 1.805 - 1.818: 95.6932% ( 932) 00:12:47.415 1.818 - 1.830: 97.2636% ( 268) 00:12:47.415 1.830 - 1.843: 97.8964% ( 108) 00:12:47.415 1.843 - 1.856: 98.5644% ( 114) 00:12:47.415 1.856 - 1.869: 99.0390% ( 81) 00:12:47.415 1.869 - 1.882: 99.1621% ( 21) 00:12:47.415 1.882 - 1.894: 99.2031% ( 7) 00:12:47.415 1.894 - 1.907: 99.2500% ( 8) 00:12:47.415 1.907 - 1.920: 99.2734% ( 4) 00:12:47.415 1.920 - 1.933: 99.2851% ( 2) 00:12:47.415 1.946 - 1.958: 99.2910% ( 1) 00:12:47.415 1.958 - 1.971: 99.3027% ( 2) 00:12:47.415 2.074 - 2.086: 99.3086% ( 1) 00:12:47.415 2.125 - 2.138: 99.3144% ( 1) 00:12:47.415 2.202 - 2.214: 99.3203% ( 1) 00:12:47.415 2.278 - 2.291: 99.3261% ( 1) 00:12:47.415 2.496 - 2.509: 99.3320% ( 1) 00:12:47.415 4.122 - 4.147: 99.3379% ( 1) 00:12:47.415 4.198 - 4.224: 99.3437% ( 1) 00:12:47.415 4.762 - 4.787: 99.3496% ( 1) 00:12:47.415 4.941 - 4.966: 99.3554% ( 1) 00:12:47.415 5.146 - 5.171: 99.3613% ( 1) 00:12:47.415 5.222 - 5.248: 99.3672% ( 1) 00:12:47.415 5.350 - 5.376: 99.3730% ( 1) 00:12:47.415 5.530 - 5.555: 99.3847% ( 2) 00:12:47.415 5.555 - 5.581: 99.3906% ( 1) 00:12:47.415 5.658 - 5.683: 99.3965% ( 1) 00:12:47.415 5.683 - 5.709: 99.4082% ( 2) 00:12:47.415 5.734 - 5.760: 99.4140% ( 1) 00:12:47.415 5.939 - 5.965: 99.4199% ( 1) 00:12:47.415 6.118 - 6.144: 99.4316% ( 2) 00:12:47.415 6.144 - 6.170: 99.4433% ( 2) 00:12:47.415 6.246 - 6.272: 99.4492% ( 1) 00:12:47.415 6.298 - 6.323: 99.4551% ( 1) 00:12:47.415 6.323 - 6.349: 99.4609% ( 1) 00:12:47.415 6.374 - 6.400: 99.4668% ( 1) 00:12:47.415 6.656 - 6.707: 99.4726% ( 1) 00:12:47.415 6.810 - 6.861: 99.4844% ( 2) 00:12:47.415 6.912 - 6.963: 99.4902% ( 1) 00:12:47.415 7.066 - 7.117: 99.4961% ( 1) 00:12:47.415 7.168 - 7.219: 99.5019% ( 1) 00:12:47.415 7.782 - 7.834: 99.5078% ( 1) 00:12:47.415 8.397 - 8.448: 99.5137% ( 1) 00:12:47.415 12.083 - 12.134: 99.5195% ( 1) 00:12:47.415 3905.946 - 3932.160: 99.5254% ( 1) 00:12:47.415 3984.589 - 4010.803: 100.0000% ( 81) 00:12:47.415 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:47.415 [ 00:12:47.415 { 00:12:47.415 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:47.415 "subtype": "Discovery", 00:12:47.415 "listen_addresses": [], 00:12:47.415 "allow_any_host": true, 00:12:47.415 "hosts": [] 00:12:47.415 }, 00:12:47.415 { 00:12:47.415 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:47.415 "subtype": "NVMe", 00:12:47.415 "listen_addresses": [ 00:12:47.415 { 00:12:47.415 "trtype": "VFIOUSER", 00:12:47.415 "adrfam": "IPv4", 00:12:47.415 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:47.415 "trsvcid": "0" 00:12:47.415 } 00:12:47.415 ], 00:12:47.415 "allow_any_host": true, 00:12:47.415 "hosts": [], 00:12:47.415 "serial_number": "SPDK1", 00:12:47.415 "model_number": "SPDK bdev Controller", 00:12:47.415 "max_namespaces": 32, 00:12:47.415 "min_cntlid": 1, 00:12:47.415 "max_cntlid": 65519, 00:12:47.415 "namespaces": [ 00:12:47.415 { 00:12:47.415 "nsid": 1, 00:12:47.415 "bdev_name": "Malloc1", 00:12:47.415 "name": "Malloc1", 00:12:47.415 "nguid": "8B60481DBE794917AA0628DC03A7DB61", 00:12:47.415 "uuid": "8b60481d-be79-4917-aa06-28dc03a7db61" 00:12:47.415 }, 00:12:47.415 { 00:12:47.415 "nsid": 2, 00:12:47.415 "bdev_name": "Malloc3", 00:12:47.415 "name": "Malloc3", 00:12:47.415 "nguid": "AB9A42A4576C4A38940F9C30179F12F0", 00:12:47.415 "uuid": "ab9a42a4-576c-4a38-940f-9c30179f12f0" 00:12:47.415 } 00:12:47.415 ] 00:12:47.415 }, 00:12:47.415 { 00:12:47.415 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:47.415 "subtype": "NVMe", 00:12:47.415 "listen_addresses": [ 00:12:47.415 { 00:12:47.415 "trtype": "VFIOUSER", 00:12:47.415 "adrfam": "IPv4", 00:12:47.415 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:47.415 "trsvcid": "0" 00:12:47.415 } 00:12:47.415 ], 00:12:47.415 "allow_any_host": true, 00:12:47.415 "hosts": [], 00:12:47.415 "serial_number": "SPDK2", 00:12:47.415 "model_number": "SPDK bdev Controller", 00:12:47.415 "max_namespaces": 32, 00:12:47.415 "min_cntlid": 1, 00:12:47.415 "max_cntlid": 65519, 00:12:47.415 "namespaces": [ 00:12:47.415 { 00:12:47.415 "nsid": 1, 00:12:47.415 "bdev_name": "Malloc2", 00:12:47.415 "name": "Malloc2", 00:12:47.415 "nguid": "A8BDEB9620B54E928A5B1B490C6F796B", 00:12:47.415 "uuid": "a8bdeb96-20b5-4e92-8a5b-1b490c6f796b" 00:12:47.415 } 00:12:47.415 ] 00:12:47.415 } 00:12:47.415 ] 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3682368 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1261 -- # local i=0 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:47.415 15:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:47.415 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.674 [2024-05-15 15:50:46.016711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:47.674 Malloc4 00:12:47.674 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:47.675 [2024-05-15 15:50:46.226198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:47.934 Asynchronous Event Request test 00:12:47.934 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:47.934 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:47.934 Registering asynchronous event callbacks... 00:12:47.934 Starting namespace attribute notice tests for all controllers... 00:12:47.934 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:47.934 aer_cb - Changed Namespace 00:12:47.934 Cleaning up... 
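
The AER test above hinges on a namespace being hot-added while the aer example is parked on an outstanding Asynchronous Event Request: the target raises a Namespace Attribute Changed event, the example's aer_cb re-reads the changed-namespace log page (log page 4, per the line above), and the touch file releases the waiting script. A sketch of the two RPCs that drive it, with arguments exactly as they appear in this log (run from the same SPDK tree, default RPC socket assumed):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4                        # 64 MiB RAM-backed bdev, 512 B blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # attach as NSID 2; this fires the AEN
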
00:12:47.934 [ 00:12:47.934 { 00:12:47.934 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:47.934 "subtype": "Discovery", 00:12:47.934 "listen_addresses": [], 00:12:47.934 "allow_any_host": true, 00:12:47.934 "hosts": [] 00:12:47.934 }, 00:12:47.934 { 00:12:47.934 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:47.934 "subtype": "NVMe", 00:12:47.934 "listen_addresses": [ 00:12:47.934 { 00:12:47.934 "trtype": "VFIOUSER", 00:12:47.934 "adrfam": "IPv4", 00:12:47.934 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:47.934 "trsvcid": "0" 00:12:47.934 } 00:12:47.934 ], 00:12:47.934 "allow_any_host": true, 00:12:47.934 "hosts": [], 00:12:47.934 "serial_number": "SPDK1", 00:12:47.934 "model_number": "SPDK bdev Controller", 00:12:47.934 "max_namespaces": 32, 00:12:47.934 "min_cntlid": 1, 00:12:47.934 "max_cntlid": 65519, 00:12:47.934 "namespaces": [ 00:12:47.934 { 00:12:47.934 "nsid": 1, 00:12:47.934 "bdev_name": "Malloc1", 00:12:47.934 "name": "Malloc1", 00:12:47.934 "nguid": "8B60481DBE794917AA0628DC03A7DB61", 00:12:47.934 "uuid": "8b60481d-be79-4917-aa06-28dc03a7db61" 00:12:47.934 }, 00:12:47.934 { 00:12:47.934 "nsid": 2, 00:12:47.934 "bdev_name": "Malloc3", 00:12:47.934 "name": "Malloc3", 00:12:47.934 "nguid": "AB9A42A4576C4A38940F9C30179F12F0", 00:12:47.934 "uuid": "ab9a42a4-576c-4a38-940f-9c30179f12f0" 00:12:47.934 } 00:12:47.934 ] 00:12:47.934 }, 00:12:47.934 { 00:12:47.934 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:47.934 "subtype": "NVMe", 00:12:47.934 "listen_addresses": [ 00:12:47.934 { 00:12:47.934 "trtype": "VFIOUSER", 00:12:47.934 "adrfam": "IPv4", 00:12:47.934 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:47.934 "trsvcid": "0" 00:12:47.934 } 00:12:47.934 ], 00:12:47.934 "allow_any_host": true, 00:12:47.934 "hosts": [], 00:12:47.934 "serial_number": "SPDK2", 00:12:47.934 "model_number": "SPDK bdev Controller", 00:12:47.934 "max_namespaces": 32, 00:12:47.934 "min_cntlid": 1, 00:12:47.934 "max_cntlid": 65519, 00:12:47.934 "namespaces": [ 00:12:47.934 { 00:12:47.934 "nsid": 1, 00:12:47.934 "bdev_name": "Malloc2", 00:12:47.934 "name": "Malloc2", 00:12:47.934 "nguid": "A8BDEB9620B54E928A5B1B490C6F796B", 00:12:47.934 "uuid": "a8bdeb96-20b5-4e92-8a5b-1b490c6f796b" 00:12:47.934 }, 00:12:47.934 { 00:12:47.934 "nsid": 2, 00:12:47.934 "bdev_name": "Malloc4", 00:12:47.934 "name": "Malloc4", 00:12:47.934 "nguid": "86F7FC558CCD4F9E8D21FDBA40088288", 00:12:47.934 "uuid": "86f7fc55-8ccd-4f9e-8d21-fdba40088288" 00:12:47.934 } 00:12:47.934 ] 00:12:47.934 } 00:12:47.934 ] 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3682368 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3674333 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3674333 ']' 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3674333 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3674333 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3674333' 00:12:47.934 killing process with pid 3674333 00:12:47.934 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3674333 00:12:47.934 [2024-05-15 15:50:46.471067] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:47.935 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3674333 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3682528 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3682528' 00:12:48.194 Process pid: 3682528 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:48.194 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:48.454 15:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3682528 00:12:48.454 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3682528 ']' 00:12:48.454 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.454 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:48.454 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.454 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:48.454 15:50:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:48.454 [2024-05-15 15:50:46.805552] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:48.454 [2024-05-15 15:50:46.806433] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:12:48.454 [2024-05-15 15:50:46.806474] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.454 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.454 [2024-05-15 15:50:46.876323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.454 [2024-05-15 15:50:46.945449] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:48.454 [2024-05-15 15:50:46.945492] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.454 [2024-05-15 15:50:46.945502] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.454 [2024-05-15 15:50:46.945510] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.454 [2024-05-15 15:50:46.945517] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.454 [2024-05-15 15:50:46.945570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.454 [2024-05-15 15:50:46.945666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.454 [2024-05-15 15:50:46.945749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.454 [2024-05-15 15:50:46.945751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.713 [2024-05-15 15:50:47.020952] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:48.713 [2024-05-15 15:50:47.021105] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:48.713 [2024-05-15 15:50:47.021337] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:48.713 [2024-05-15 15:50:47.021688] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:48.713 [2024-05-15 15:50:47.021958] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
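
The target restarted above runs with --interrupt-mode, meaning the reactors block on file descriptors between events instead of busy-polling; the thread.c notices confirm that each nvmf poll group (and the app thread) was switched over. A rough sketch of the equivalent manual bring-up, with arguments exactly as they appear in this run (-M -I being the transport_args the test script passes through to the VFIOUSER transport):

    sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
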
00:12:49.281 15:50:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:49.281 15:50:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:12:49.281 15:50:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:50.220 15:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:50.543 15:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:50.543 15:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:50.543 15:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:50.543 15:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:50.543 15:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:50.543 Malloc1 00:12:50.543 15:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:50.803 15:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:50.803 15:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:51.062 [2024-05-15 15:50:49.482163] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:51.062 15:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:51.062 15:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:51.062 15:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:51.322 Malloc2 00:12:51.322 15:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:51.581 15:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:51.581 15:50:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3682528 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3682528 ']' 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3682528 
00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3682528 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3682528' 00:12:51.839 killing process with pid 3682528 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3682528 00:12:51.839 [2024-05-15 15:50:50.307420] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:51.839 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3682528 00:12:52.098 15:50:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:52.098 15:50:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:52.098 00:12:52.098 real 0m52.001s 00:12:52.098 user 3m24.485s 00:12:52.098 sys 0m4.782s 00:12:52.098 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.098 15:50:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:52.098 ************************************ 00:12:52.098 END TEST nvmf_vfio_user 00:12:52.098 ************************************ 00:12:52.098 15:50:50 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:52.098 15:50:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:52.098 15:50:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.098 15:50:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:52.098 ************************************ 00:12:52.098 START TEST nvmf_vfio_user_nvme_compliance 00:12:52.098 ************************************ 00:12:52.098 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:52.358 * Looking for test storage... 
00:12:52.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3683251 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3683251' 00:12:52.358 Process pid: 3683251 00:12:52.358 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:52.359 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3683251 00:12:52.359 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:52.359 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3683251 ']' 00:12:52.359 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.359 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.359 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.359 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.359 15:50:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.359 [2024-05-15 15:50:50.846486] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:12:52.359 [2024-05-15 15:50:50.846538] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.359 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.359 [2024-05-15 15:50:50.917313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.618 [2024-05-15 15:50:50.990756] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.618 [2024-05-15 15:50:50.990796] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.618 [2024-05-15 15:50:50.990807] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.618 [2024-05-15 15:50:50.990815] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.618 [2024-05-15 15:50:50.990822] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
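Condensed, the startup being traced here is: launch nvmf_tgt in the background on a three-core mask with all trace groups enabled, arm a trap so an aborted test still tears the target down, and wait for the RPC socket before issuing any configuration. A simplified sketch of that sequence (the polling loop stands in for the real waitforlisten helper, whose implementation is not shown in this log):

# Simplified startup for the compliance target; waitforlisten is approximated by an RPC poll.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &     # shm id 0, all tracepoints, cores 0-2
nvmfpid=$!
trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT       # clean up even if the test dies early

until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5                                          # target is up once the RPC socket answers
done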
00:12:52.618 [2024-05-15 15:50:50.990870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.618 [2024-05-15 15:50:50.990965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.618 [2024-05-15 15:50:50.990967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.188 15:50:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.188 15:50:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:12:53.188 15:50:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.128 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.387 malloc0 00:12:54.387 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.387 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:54.387 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.387 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.387 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.388 [2024-05-15 15:50:52.721229] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.388 15:50:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:54.388 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.388 00:12:54.388 00:12:54.388 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.388 http://cunit.sourceforge.net/ 00:12:54.388 00:12:54.388 00:12:54.388 Suite: nvme_compliance 00:12:54.388 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 15:50:52.887468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.388 [2024-05-15 15:50:52.888813] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:54.388 [2024-05-15 15:50:52.888831] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:54.388 [2024-05-15 15:50:52.888839] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:54.388 [2024-05-15 15:50:52.890482] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.388 passed 00:12:54.647 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 15:50:52.967023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.647 [2024-05-15 15:50:52.973058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.647 passed 00:12:54.647 Test: admin_identify_ns ...[2024-05-15 15:50:53.051295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.647 [2024-05-15 15:50:53.113204] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:54.647 [2024-05-15 15:50:53.121202] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:54.647 [2024-05-15 15:50:53.142301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.647 passed 00:12:54.906 Test: admin_get_features_mandatory_features ...[2024-05-15 15:50:53.213553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.906 [2024-05-15 15:50:53.217583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.906 passed 00:12:54.906 Test: admin_get_features_optional_features ...[2024-05-15 15:50:53.292058] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.906 [2024-05-15 15:50:53.295081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.906 passed 00:12:54.906 Test: admin_set_features_number_of_queues ...[2024-05-15 15:50:53.369502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.165 [2024-05-15 15:50:53.473281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.165 passed 00:12:55.165 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 15:50:53.547461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.165 [2024-05-15 15:50:53.550481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.165 passed 
00:12:55.165 Test: admin_get_log_page_with_lpo ...[2024-05-15 15:50:53.624926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.165 [2024-05-15 15:50:53.692202] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:55.165 [2024-05-15 15:50:53.705411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.424 passed 00:12:55.424 Test: fabric_property_get ...[2024-05-15 15:50:53.779492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.424 [2024-05-15 15:50:53.780710] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:55.424 [2024-05-15 15:50:53.782509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.424 passed 00:12:55.424 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 15:50:53.857994] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.424 [2024-05-15 15:50:53.859225] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:55.424 [2024-05-15 15:50:53.861023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.424 passed 00:12:55.424 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 15:50:53.937509] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.684 [2024-05-15 15:50:54.021208] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.684 [2024-05-15 15:50:54.036213] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.684 [2024-05-15 15:50:54.041292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.684 passed 00:12:55.684 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 15:50:54.112622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.684 [2024-05-15 15:50:54.116414] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:55.684 [2024-05-15 15:50:54.117648] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.684 passed 00:12:55.684 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 15:50:54.192060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.943 [2024-05-15 15:50:54.268198] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:55.943 [2024-05-15 15:50:54.292197] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.943 [2024-05-15 15:50:54.297283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.943 passed 00:12:55.943 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 15:50:54.368611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.943 [2024-05-15 15:50:54.369827] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:55.943 [2024-05-15 15:50:54.369852] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:55.943 [2024-05-15 15:50:54.371623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.943 passed 00:12:55.943 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
15:50:54.448098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.203 [2024-05-15 15:50:54.540199] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:56.203 [2024-05-15 15:50:54.548198] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:56.203 [2024-05-15 15:50:54.556200] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:56.203 [2024-05-15 15:50:54.564207] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:56.203 [2024-05-15 15:50:54.593284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.203 passed 00:12:56.203 Test: admin_create_io_sq_verify_pc ...[2024-05-15 15:50:54.664586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.203 [2024-05-15 15:50:54.684207] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:56.203 [2024-05-15 15:50:54.701764] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.203 passed 00:12:56.462 Test: admin_create_io_qp_max_qps ...[2024-05-15 15:50:54.774271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.400 [2024-05-15 15:50:55.870202] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:57.969 [2024-05-15 15:50:56.250027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.969 passed 00:12:57.969 Test: admin_create_io_sq_shared_cq ...[2024-05-15 15:50:56.325330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.969 [2024-05-15 15:50:56.458197] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:57.969 [2024-05-15 15:50:56.495263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.969 passed 00:12:57.969 00:12:57.969 Run Summary: Type Total Ran Passed Failed Inactive 00:12:57.969 suites 1 1 n/a 0 0 00:12:57.969 tests 18 18 18 0 0 00:12:57.969 asserts 360 360 360 0 n/a 00:12:57.969 00:12:57.969 Elapsed time = 1.483 seconds 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3683251 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3683251 ']' 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3683251 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3683251 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3683251' 00:12:58.229 killing process with pid 3683251 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 3683251 00:12:58.229 [2024-05-15 15:50:56.595359] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:58.229 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3683251 00:12:58.489 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:58.489 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:58.489 00:12:58.489 real 0m6.171s 00:12:58.489 user 0m17.326s 00:12:58.489 sys 0m0.709s 00:12:58.489 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:58.489 15:50:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:58.489 ************************************ 00:12:58.489 END TEST nvmf_vfio_user_nvme_compliance 00:12:58.489 ************************************ 00:12:58.490 15:50:56 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:58.490 15:50:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:58.490 15:50:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.490 15:50:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.490 ************************************ 00:12:58.490 START TEST nvmf_vfio_user_fuzz 00:12:58.490 ************************************ 00:12:58.490 15:50:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:58.490 * Looking for test storage... 
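Each suite in this log is launched through the same run_test wrapper whose banners bracket the output: print a START marker, time the test command, then print an END marker so the CI log can be sliced per test. Roughly, and inferred only from the banners and real/user/sys timing lines visible here:

# run_test-style wrapper, reconstructed from the START/END banners above; illustrative only.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # e.g. run_test nvmf_vfio_user_fuzz .../vfio_user_fuzz.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}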
00:12:58.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.490 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:58.749 15:50:57 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3684379 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3684379' 00:12:58.749 Process pid: 3684379 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3684379 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3684379 ']' 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:58.749 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:59.688 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:12:59.688 15:50:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.625 malloc0 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:00.625 15:50:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:32.770 Fuzzing completed. Shutting down the fuzz application 00:13:32.770 00:13:32.770 Dumping successful admin opcodes: 00:13:32.770 8, 9, 10, 24, 00:13:32.770 Dumping successful io opcodes: 00:13:32.770 0, 00:13:32.770 NS: 0x200003a1ef00 I/O qp, Total commands completed: 936348, total successful commands: 3662, random_seed: 2787209536 00:13:32.770 NS: 0x200003a1ef00 admin qp, Total commands completed: 229679, total successful commands: 1838, random_seed: 2658343424 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3684379 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3684379 ']' 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3684379 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3684379 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3684379' 00:13:32.770 killing process with pid 3684379 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3684379 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3684379 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
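Stripped of the xtrace prefixes, the fuzz target above is provisioned with the same handful of RPCs used for the compliance run, and then the generic nvme_fuzz tool is pointed at the vfio-user socket directory. A condensed equivalent using rpc.py directly (the log drives these through the rpc_cmd wrapper; the paths, NQN and the 30-second/seed-123456 fuzz options are the ones shown above):

# Condensed VFIO-user target provisioning plus the fuzz invocation traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2021-09.io.spdk:cnode0
DIR=/var/run/vfio-user

mkdir -p "$DIR"
"$RPC" nvmf_create_transport -t VFIOUSER                 # vfio-user transport instead of TCP
"$RPC" bdev_malloc_create 64 512 -b malloc0               # 64 MiB RAM disk, 512-byte blocks
"$RPC" nvmf_create_subsystem "$NQN" -a -s spdk            # allow any host, serial "spdk"
"$RPC" nvmf_subsystem_add_ns "$NQN" malloc0
"$RPC" nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$DIR" -s 0

# Fuzz admin + I/O queues for 30 s with a fixed seed, the same options used in this run.
"$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F "trtype:VFIOUSER subnqn:$NQN traddr:$DIR" -N -a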
00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:32.770 00:13:32.770 real 0m32.872s 00:13:32.770 user 0m31.651s 00:13:32.770 sys 0m29.571s 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:32.770 15:51:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.770 ************************************ 00:13:32.770 END TEST nvmf_vfio_user_fuzz 00:13:32.770 ************************************ 00:13:32.770 15:51:29 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:32.770 15:51:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:32.770 15:51:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.770 15:51:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:32.770 ************************************ 00:13:32.770 START TEST nvmf_host_management 00:13:32.770 ************************************ 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:32.770 * Looking for test storage... 00:13:32.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.770 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.771 15:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.771 15:51:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.346 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.346 15:51:36 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:39.347 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:39.347 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
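The NIC discovery being traced boils down to matching PCI device IDs against a list of supported Intel E810/X722 and Mellanox parts; on this machine both ports of an E810 (0x8086:0x159b, driver ice) match. A rough stand-alone equivalent using lspci and sysfs rather than the common.sh cache (illustrative only):

# Rough equivalent of the discovery above: locate E810 ports (0x8086:0x159b) and their netdevs.
pci_devs=()
while read -r addr _; do
    pci_devs+=("0000:$addr")                   # lspci prints bus addresses without the PCI domain
done < <(lspci -d 8086:159b)

for pci in "${pci_devs[@]}"; do
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net device under $pci: $(basename "$net")"   # e.g. cvl_0_0, cvl_0_1
    done
done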
00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:39.347 Found net devices under 0000:af:00.0: cvl_0_0 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:39.347 Found net devices under 0000:af:00.1: cvl_0_1 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:39.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:13:39.347 00:13:39.347 --- 10.0.0.2 ping statistics --- 00:13:39.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.347 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:39.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:13:39.347 00:13:39.347 --- 10.0.0.1 ping statistics --- 00:13:39.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.347 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:39.347 15:51:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3693897 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3693897 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3693897 ']' 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.348 [2024-05-15 15:51:37.092657] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
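Because this is a NET_TYPE=phy run with two E810 ports cabled back-to-back, the target port is moved into its own network namespace, both sides get 10.0.0.x/24 addresses, port 4420 is opened, and a single ping in each direction proves the link before any NVMe/TCP traffic. Condensed, with the interface and address names from this log:

# Condensed phy-mode network setup, using the interface/address names from this log.
TARGET_IF=cvl_0_0            # will carry the NVMe/TCP listener at 10.0.0.2
INITIATOR_IF=cvl_0_1         # initiator side at 10.0.0.1
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                       # isolate the target port
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator
modprobe nvme-tcp                                          # kernel initiator used by later tests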
00:13:39.348 [2024-05-15 15:51:37.092703] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:39.348 EAL: No free 2048 kB hugepages reported on node 1
00:13:39.348 [2024-05-15 15:51:37.166953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:39.348 [2024-05-15 15:51:37.236390] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:39.348 [2024-05-15 15:51:37.236432] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:39.348 [2024-05-15 15:51:37.236442] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:39.348 [2024-05-15 15:51:37.236450] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:39.348 [2024-05-15 15:51:37.236473] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:39.348 [2024-05-15 15:51:37.236577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:39.348 [2024-05-15 15:51:37.236649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:39.348 [2024-05-15 15:51:37.236738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:39.348 [2024-05-15 15:51:37.236740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0
00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:39.348 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:39.608 [2024-05-15 15:51:37.939069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:39.608 15:51:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:39.608 Malloc0
00:13:39.608 [2024-05-15 15:51:38.005447] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:13:39.608 [2024-05-15 15:51:38.005725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3694058
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3694058 /var/tmp/bdevperf.sock
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3694058 ']'
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:13:39.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:39.608 {
00:13:39.608 "params": {
00:13:39.608 "name": "Nvme$subsystem",
00:13:39.608 "trtype": "$TEST_TRANSPORT",
00:13:39.608 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:39.608 "adrfam": "ipv4",
00:13:39.608 "trsvcid": "$NVMF_PORT",
00:13:39.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:39.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:39.608 "hdgst": ${hdgst:-false},
00:13:39.608 "ddgst": ${ddgst:-false}
00:13:39.608 },
00:13:39.608 "method": "bdev_nvme_attach_controller"
00:13:39.608 }
00:13:39.608 EOF
00:13:39.608 )")
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
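gen_nvmf_target_json, traced above, builds the bdevperf configuration on the fly: for each subsystem index it emits one bdev_nvme_attach_controller stanza from a heredoc, joins the stanzas with a comma IFS, and bdevperf reads the result through process substitution (the --json /dev/fd/63 argument). A condensed sketch of that shape, with this run's values baked in where the real helper reads $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT; the actual function in the test harness may wrap the stanzas further before bdevperf consumes them:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # jq pretty-prints and validates; as in the trace, this is well-formed
    # JSON for the single-subsystem case exercised here.
    printf '%s\n' "${config[*]}" | jq .
}

# Used as in the trace, via an anonymous fd:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10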
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:13:39.608 15:51:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:39.608 "params": {
00:13:39.608 "name": "Nvme0",
00:13:39.608 "trtype": "tcp",
00:13:39.608 "traddr": "10.0.0.2",
00:13:39.608 "adrfam": "ipv4",
00:13:39.608 "trsvcid": "4420",
00:13:39.608 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:13:39.608 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:13:39.608 "hdgst": false,
00:13:39.608 "ddgst": false
00:13:39.608 },
00:13:39.608 "method": "bdev_nvme_attach_controller"
00:13:39.608 }'
00:13:39.608 [2024-05-15 15:51:38.106839] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:13:39.608 [2024-05-15 15:51:38.106890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3694058 ]
00:13:39.608 EAL: No free 2048 kB hugepages reported on node 1
00:13:39.869 [2024-05-15 15:51:38.178846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:39.869 [2024-05-15 15:51:38.249336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:40.128 Running I/O for 10 seconds...
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:40.388 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:40.650 15:51:38
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.650 [2024-05-15 15:51:38.976852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd292f0 is same with the state(5) to be set 00:13:40.650 [2024-05-15 15:51:38.976917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd292f0 is same with the state(5) to be set 00:13:40.650 [2024-05-15 15:51:38.976927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd292f0 is same with the state(5) to be set 00:13:40.650 [2024-05-15 15:51:38.976936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd292f0 is same with the state(5) to be set 00:13:40.650 [2024-05-15 15:51:38.976945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd292f0 is same with the state(5) to be set 00:13:40.650 [2024-05-15 15:51:38.976953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd292f0 is same with the state(5) to be set 00:13:40.650 [2024-05-15 15:51:38.976962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd292f0 is same with the state(5) to be set 00:13:40.650 [2024-05-15 15:51:38.976970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd292f0 is same with the state(5) to be set 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.650 [2024-05-15 15:51:38.982937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.650 [2024-05-15 15:51:38.982971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.982988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.650 [2024-05-15 15:51:38.982998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.650 [2024-05-15 15:51:38.983019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.650 [2024-05-15 15:51:38.983039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209f0 is same with the state(5) to be set 00:13:40.650 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.650 [2024-05-15 15:51:38.983710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.983988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.983998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.984008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.984019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.984028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.650 [2024-05-15 15:51:38.984039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.650 [2024-05-15 15:51:38.984049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.651 [2024-05-15 15:51:38.984652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.651 [2024-05-15 15:51:38.984662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.984987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.984996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.985006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.985015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.985026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:40.652 [2024-05-15 15:51:38.985035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.652 [2024-05-15 15:51:38.985099] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc31ad0 was disconnected and freed. reset controller. 
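This wall of notices is the point of the test: waitforio confirmed real traffic (read_io_count=387, past the 100-op threshold), host_management.sh then revoked the initiator's host NQN from cnode0 while bdevperf still had a full queue depth of 64 writes outstanding, the target dropped the TCP qpair, and every queued command (cid 0 through 63, lba 57344 through 65408) completed as ABORTED - SQ DELETION before bdev_nvme freed qpair 0xc31ad0 and scheduled a controller reset. The injection step, spelled out with plain rpc.py calls; rpc_cmd in the trace is a thin wrapper around this, and the polling interval here is illustrative:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Wait until bdevperf has demonstrably issued I/O, as waitforio does above.
while io=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops'); [ "$io" -lt 100 ]; do
    sleep 0.5
done

# Revoke the initiator's access mid-I/O, then restore it so the retry run can succeed.
"$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0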
00:13:40.652 [2024-05-15 15:51:38.985952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:40.652 task offset: 57344 on job bdev=Nvme0n1 fails
00:13:40.652
00:13:40.652 Latency(us)
00:13:40.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:40.652 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:40.652 Job: Nvme0n1 ended in about 0.41 seconds with error
00:13:40.652 Verification LBA range: start 0x0 length 0x400
00:13:40.652 Nvme0n1 : 0.41 1086.20 67.89 155.17 0.00 50399.34 1671.17 54945.38
00:13:40.652 ===================================================================================================================
00:13:40.652 Total : 1086.20 67.89 155.17 0.00 50399.34 1671.17 54945.38
00:13:40.652 [2024-05-15 15:51:38.987500] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:40.652 [2024-05-15 15:51:38.987516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8209f0 (9): Bad file descriptor
00:13:40.652 15:51:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:40.652 15:51:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-05-15 15:51:39.091600] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:13:41.591 15:51:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3694058
00:13:41.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3694058) - No such process
00:13:41.591 15:51:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:13:41.591 15:51:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:41.591 {
00:13:41.591 "params": {
00:13:41.591 "name": "Nvme$subsystem",
00:13:41.591 "trtype": "$TEST_TRANSPORT",
00:13:41.591 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:41.591 "adrfam": "ipv4",
00:13:41.591 "trsvcid": "$NVMF_PORT",
00:13:41.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:41.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:41.591 "hdgst": ${hdgst:-false},
00:13:41.591 "ddgst": ${ddgst:-false}
00:13:41.591 },
00:13:41.591 "method": "bdev_nvme_attach_controller"
00:13:41.591 }
00:13:41.591 EOF
00:13:41.591 )")
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
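A quick way to sanity-check the bdevperf tables in this log: with the fixed 64 KiB I/O size (-o 65536), throughput in MiB/s is simply IOPS divided by 16, and Fail/s counts the aborted commands, with the Average/min/max columns in microseconds per the Latency(us) header. The failed run above sustained 1086.20 IOPS (67.89 MiB/s) for the 0.41 s the job survived, with 155.17 fail/s from the SQ-deletion aborts; the 1 s retry run configured here and reported below reaches 1146.77 IOPS (71.67 MiB/s) with 0.00 fail/s, which is what proves the controller reset recovered the path. The arithmetic, checkable in place:

# MiB/s = IOPS * io_size / 2^20; with -o 65536 that is IOPS / 16.
awk 'BEGIN { printf "%.2f\n", 1086.20 * 65536 / 1048576 }'   # 67.89, matches the failed run
awk 'BEGIN { printf "%.2f\n", 1146.77 * 65536 / 1048576 }'   # 71.67, matches the retry run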
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:13:41.591 15:51:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:41.591 "params": {
00:13:41.591 "name": "Nvme0",
00:13:41.591 "trtype": "tcp",
00:13:41.591 "traddr": "10.0.0.2",
00:13:41.591 "adrfam": "ipv4",
00:13:41.591 "trsvcid": "4420",
00:13:41.591 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:13:41.591 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:13:41.591 "hdgst": false,
00:13:41.591 "ddgst": false
00:13:41.591 },
00:13:41.591 "method": "bdev_nvme_attach_controller"
00:13:41.591 }'
00:13:41.591 [2024-05-15 15:51:40.049347] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:13:41.591 [2024-05-15 15:51:40.049400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3694477 ]
00:13:41.591 EAL: No free 2048 kB hugepages reported on node 1
00:13:41.850 [2024-05-15 15:51:40.120327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:41.850 [2024-05-15 15:51:40.193013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.110 Running I/O for 1 seconds...
00:13:43.047
00:13:43.047 Latency(us)
00:13:43.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:43.047 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:43.047 Verification LBA range: start 0x0 length 0x400
00:13:43.047 Nvme0n1 : 1.00 1146.77 71.67 0.00 0.00 55147.34 12425.63 54106.52
00:13:43.047 ===================================================================================================================
00:13:43.047 Total : 1146.77 71.67 0.00 0.00 55147.34 12425.63 54106.52
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:43.307 rmmod nvme_tcp
00:13:43.307 rmmod nvme_fabrics
00:13:43.307 rmmod nvme_keyring
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3693897 ']'
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3693897
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3693897 ']'
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3693897
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3693897
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3693897'
00:13:43.307 killing process with pid 3693897
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3693897
00:13:43.307 [2024-05-15 15:51:41.851452] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:13:43.307 15:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3693897
00:13:43.567 [2024-05-15 15:51:42.050953] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:13:43.567 15:51:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:43.567 15:51:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:43.567 15:51:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:43.567 15:51:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:43.567 15:51:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:43.567 15:51:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:43.567 15:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:43.567 15:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:46.107 15:51:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:46.107 15:51:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:13:46.107
00:13:46.107 real 0m14.294s
00:13:46.107 user 0m23.772s
00:13:46.107 sys 0m6.609s
00:13:46.107 15:51:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:46.107 15:51:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:46.107 ************************************
00:13:46.107 END TEST nvmf_host_management
00:13:46.107 ************************************
00:13:46.107 15:51:44 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:13:46.107 15:51:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:13:46.107 15:51:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:46.107 15:51:44 nvmf_tcp --
common/autotest_common.sh@10 -- # set +x 00:13:46.107 ************************************ 00:13:46.107 START TEST nvmf_lvol 00:13:46.107 ************************************ 00:13:46.107 15:51:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:46.107 * Looking for test storage... 00:13:46.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.107 15:51:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.108 15:51:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.678 15:51:50 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:52.678 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:52.678 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.678 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:52.679 Found net devices under 0000:af:00.0: cvl_0_0 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:52.679 Found net devices under 0000:af:00.1: cvl_0_1 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:13:52.679 00:13:52.679 --- 10.0.0.2 ping statistics --- 00:13:52.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.679 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:13:52.679 00:13:52.679 --- 10.0.0.1 ping statistics --- 00:13:52.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.679 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3698446 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3698446 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3698446 ']' 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:52.679 15:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:52.679 [2024-05-15 15:51:51.022577] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:52.679 [2024-05-15 15:51:51.022625] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.679 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.679 [2024-05-15 15:51:51.096442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.679 [2024-05-15 15:51:51.170077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.679 [2024-05-15 15:51:51.170111] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:52.679 [2024-05-15 15:51:51.170120] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.679 [2024-05-15 15:51:51.170128] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.679 [2024-05-15 15:51:51.170151] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.679 [2024-05-15 15:51:51.170203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.679 [2024-05-15 15:51:51.170264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.679 [2024-05-15 15:51:51.170267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.615 15:51:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:53.615 15:51:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:13:53.615 15:51:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.615 15:51:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.615 15:51:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:53.615 15:51:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.615 15:51:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:53.615 [2024-05-15 15:51:52.022960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.615 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.874 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:53.874 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:54.133 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:54.133 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:54.133 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:54.392 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5fc040b8-f136-4283-a59b-f22f3f3b8162 00:13:54.392 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5fc040b8-f136-4283-a59b-f22f3f3b8162 lvol 20 00:13:54.652 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3263e436-6774-4356-8ffc-e30bf3d49012 00:13:54.652 15:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:54.652 15:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3263e436-6774-4356-8ffc-e30bf3d49012 00:13:54.911 15:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:13:55.169 [2024-05-15 15:51:53.500575] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:55.169 [2024-05-15 15:51:53.500848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.169 15:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:55.169 15:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3698878 00:13:55.169 15:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:55.169 15:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:55.427 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.403 15:51:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3263e436-6774-4356-8ffc-e30bf3d49012 MY_SNAPSHOT 00:13:56.403 15:51:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=56dd79f5-4b43-4a9e-831c-710811f555c0 00:13:56.403 15:51:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3263e436-6774-4356-8ffc-e30bf3d49012 30 00:13:56.661 15:51:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 56dd79f5-4b43-4a9e-831c-710811f555c0 MY_CLONE 00:13:56.920 15:51:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3e5a6447-fefc-460f-8ba6-a9af4e63ae76 00:13:56.920 15:51:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3e5a6447-fefc-460f-8ba6-a9af4e63ae76 00:13:57.179 15:51:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3698878 00:14:07.161 Initializing NVMe Controllers 00:14:07.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:07.161 Controller IO queue size 128, less than required. 00:14:07.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:07.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:07.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:07.161 Initialization complete. Launching workers. 
00:14:07.161 ======================================================== 00:14:07.161 Latency(us) 00:14:07.161 Device Information : IOPS MiB/s Average min max 00:14:07.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12202.70 47.67 10492.56 1578.11 80877.70 00:14:07.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12037.80 47.02 10635.07 3646.68 45459.49 00:14:07.161 ======================================================== 00:14:07.161 Total : 24240.50 94.69 10563.33 1578.11 80877.70 00:14:07.161 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3263e436-6774-4356-8ffc-e30bf3d49012 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5fc040b8-f136-4283-a59b-f22f3f3b8162 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.161 rmmod nvme_tcp 00:14:07.161 rmmod nvme_fabrics 00:14:07.161 rmmod nvme_keyring 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3698446 ']' 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3698446 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3698446 ']' 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3698446 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3698446 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:07.161 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:07.162 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3698446' 00:14:07.162 killing process with pid 3698446 00:14:07.162 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3698446 00:14:07.162 [2024-05-15 15:52:04.815696] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:14:07.162 15:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3698446 00:14:07.162 15:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.162 15:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.162 15:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.162 15:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.162 15:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.162 15:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.162 15:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.162 15:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.070 15:52:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.070 00:14:09.070 real 0m22.882s 00:14:09.070 user 1m2.258s 00:14:09.070 sys 0m9.923s 00:14:09.070 15:52:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.070 15:52:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:09.070 ************************************ 00:14:09.070 END TEST nvmf_lvol 00:14:09.070 ************************************ 00:14:09.070 15:52:07 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:09.070 15:52:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:09.070 15:52:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.070 15:52:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.070 ************************************ 00:14:09.070 START TEST nvmf_lvs_grow 00:14:09.070 ************************************ 00:14:09.070 15:52:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:09.070 * Looking for test storage... 
00:14:09.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.070 15:52:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.070 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:09.070 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.071 15:52:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:15.647 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:15.647 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:15.648 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:15.648 Found net devices under 0000:af:00.0: cvl_0_0 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:15.648 Found net devices under 0000:af:00.1: cvl_0_1 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.648 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.907 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.907 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.907 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:15.907 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.908 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.908 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:14:16.168 00:14:16.168 --- 10.0.0.2 ping statistics --- 00:14:16.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.168 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:14:16.168 00:14:16.168 --- 10.0.0.1 ping statistics --- 00:14:16.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.168 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3704573 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3704573 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3704573 ']' 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:16.168 15:52:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.168 [2024-05-15 15:52:14.589999] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:16.168 [2024-05-15 15:52:14.590046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.168 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.168 [2024-05-15 15:52:14.664994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.428 [2024-05-15 15:52:14.740067] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.428 [2024-05-15 15:52:14.740103] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:16.428 [2024-05-15 15:52:14.740113] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.428 [2024-05-15 15:52:14.740121] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.428 [2024-05-15 15:52:14.740144] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.428 [2024-05-15 15:52:14.740168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.997 15:52:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:16.997 15:52:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:14:16.997 15:52:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.997 15:52:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:16.997 15:52:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.997 15:52:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.997 15:52:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:17.257 [2024-05-15 15:52:15.587737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:17.257 ************************************ 00:14:17.257 START TEST lvs_grow_clean 00:14:17.257 ************************************ 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:17.257 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:17.258 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:17.258 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:17.258 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:17.258 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:17.258 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:17.517 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:17.517 15:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:17.517 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:17.518 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:17.518 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:17.784 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:17.784 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:17.784 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5e6e691-4928-4aba-a7a4-e139fafdd93a lvol 150 00:14:18.043 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7b4934d4-e92a-465d-9c1d-68110ddedf65 00:14:18.043 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:18.043 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:18.043 [2024-05-15 15:52:16.521356] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:18.043 [2024-05-15 15:52:16.521403] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:18.043 true 00:14:18.043 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:18.043 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:18.303 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:18.303 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:18.562 15:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b4934d4-e92a-465d-9c1d-68110ddedf65 00:14:18.562 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:18.822 [2024-05-15 15:52:17.191167] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:18.822 [2024-05-15 
15:52:17.191460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3705104 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3705104 /var/tmp/bdevperf.sock 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3705104 ']' 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:18.822 15:52:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:19.082 [2024-05-15 15:52:17.417479] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:19.082 [2024-05-15 15:52:17.417531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705104 ] 00:14:19.082 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.082 [2024-05-15 15:52:17.486393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.082 [2024-05-15 15:52:17.558775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.651 15:52:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:19.651 15:52:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:14:19.651 15:52:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:19.911 Nvme0n1 00:14:19.911 15:52:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:20.171 [ 00:14:20.171 { 00:14:20.171 "name": "Nvme0n1", 00:14:20.171 "aliases": [ 00:14:20.171 "7b4934d4-e92a-465d-9c1d-68110ddedf65" 00:14:20.171 ], 00:14:20.171 "product_name": "NVMe disk", 00:14:20.171 "block_size": 4096, 00:14:20.171 "num_blocks": 38912, 00:14:20.171 "uuid": "7b4934d4-e92a-465d-9c1d-68110ddedf65", 00:14:20.171 "assigned_rate_limits": { 00:14:20.171 "rw_ios_per_sec": 0, 00:14:20.171 "rw_mbytes_per_sec": 0, 00:14:20.171 "r_mbytes_per_sec": 0, 00:14:20.171 "w_mbytes_per_sec": 0 00:14:20.171 }, 00:14:20.171 "claimed": false, 00:14:20.171 "zoned": false, 00:14:20.171 "supported_io_types": { 00:14:20.171 "read": true, 00:14:20.171 "write": true, 00:14:20.171 "unmap": true, 00:14:20.171 "write_zeroes": true, 00:14:20.171 "flush": true, 00:14:20.171 "reset": true, 00:14:20.171 "compare": true, 00:14:20.171 "compare_and_write": true, 00:14:20.171 "abort": true, 00:14:20.171 "nvme_admin": true, 00:14:20.171 "nvme_io": true 00:14:20.171 }, 00:14:20.171 "memory_domains": [ 00:14:20.171 { 00:14:20.171 "dma_device_id": "system", 00:14:20.171 "dma_device_type": 1 00:14:20.171 } 00:14:20.171 ], 00:14:20.171 "driver_specific": { 00:14:20.171 "nvme": [ 00:14:20.171 { 00:14:20.171 "trid": { 00:14:20.171 "trtype": "TCP", 00:14:20.171 "adrfam": "IPv4", 00:14:20.171 "traddr": "10.0.0.2", 00:14:20.171 "trsvcid": "4420", 00:14:20.171 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:20.171 }, 00:14:20.171 "ctrlr_data": { 00:14:20.171 "cntlid": 1, 00:14:20.171 "vendor_id": "0x8086", 00:14:20.171 "model_number": "SPDK bdev Controller", 00:14:20.171 "serial_number": "SPDK0", 00:14:20.171 "firmware_revision": "24.05", 00:14:20.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.171 "oacs": { 00:14:20.171 "security": 0, 00:14:20.171 "format": 0, 00:14:20.171 "firmware": 0, 00:14:20.171 "ns_manage": 0 00:14:20.171 }, 00:14:20.171 "multi_ctrlr": true, 00:14:20.171 "ana_reporting": false 00:14:20.171 }, 00:14:20.171 "vs": { 00:14:20.171 "nvme_version": "1.3" 00:14:20.171 }, 00:14:20.171 "ns_data": { 00:14:20.171 "id": 1, 00:14:20.171 "can_share": true 00:14:20.171 } 00:14:20.171 } 00:14:20.171 ], 00:14:20.171 "mp_policy": "active_passive" 00:14:20.171 } 00:14:20.171 } 00:14:20.171 ] 00:14:20.171 15:52:18 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3705236 00:14:20.171 15:52:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:20.171 15:52:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.171 Running I/O for 10 seconds... 00:14:21.588 Latency(us) 00:14:21.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.588 Nvme0n1 : 1.00 23614.00 92.24 0.00 0.00 0.00 0.00 0.00 00:14:21.588 =================================================================================================================== 00:14:21.588 Total : 23614.00 92.24 0.00 0.00 0.00 0.00 0.00 00:14:21.588 00:14:22.155 15:52:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:22.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.414 Nvme0n1 : 2.00 23688.50 92.53 0.00 0.00 0.00 0.00 0.00 00:14:22.414 =================================================================================================================== 00:14:22.414 Total : 23688.50 92.53 0.00 0.00 0.00 0.00 0.00 00:14:22.414 00:14:22.414 true 00:14:22.414 15:52:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:22.414 15:52:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:22.673 15:52:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:22.673 15:52:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:22.673 15:52:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3705236 00:14:23.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.242 Nvme0n1 : 3.00 23915.00 93.42 0.00 0.00 0.00 0.00 0.00 00:14:23.242 =================================================================================================================== 00:14:23.242 Total : 23915.00 93.42 0.00 0.00 0.00 0.00 0.00 00:14:23.242 00:14:24.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.180 Nvme0n1 : 4.00 23936.00 93.50 0.00 0.00 0.00 0.00 0.00 00:14:24.180 =================================================================================================================== 00:14:24.180 Total : 23936.00 93.50 0.00 0.00 0.00 0.00 0.00 00:14:24.180 00:14:25.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.556 Nvme0n1 : 5.00 24044.00 93.92 0.00 0.00 0.00 0.00 0.00 00:14:25.556 =================================================================================================================== 00:14:25.556 Total : 24044.00 93.92 0.00 0.00 0.00 0.00 0.00 00:14:25.556 00:14:26.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.492 Nvme0n1 : 6.00 24132.50 94.27 0.00 0.00 0.00 0.00 0.00 00:14:26.492 
=================================================================================================================== 00:14:26.492 Total : 24132.50 94.27 0.00 0.00 0.00 0.00 0.00 00:14:26.492 00:14:27.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.429 Nvme0n1 : 7.00 24103.86 94.16 0.00 0.00 0.00 0.00 0.00 00:14:27.429 =================================================================================================================== 00:14:27.429 Total : 24103.86 94.16 0.00 0.00 0.00 0.00 0.00 00:14:27.429 00:14:28.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.367 Nvme0n1 : 8.00 24039.38 93.90 0.00 0.00 0.00 0.00 0.00 00:14:28.367 =================================================================================================================== 00:14:28.367 Total : 24039.38 93.90 0.00 0.00 0.00 0.00 0.00 00:14:28.367 00:14:29.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.306 Nvme0n1 : 9.00 23991.89 93.72 0.00 0.00 0.00 0.00 0.00 00:14:29.306 =================================================================================================================== 00:14:29.306 Total : 23991.89 93.72 0.00 0.00 0.00 0.00 0.00 00:14:29.306 00:14:30.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.245 Nvme0n1 : 10.00 23955.10 93.57 0.00 0.00 0.00 0.00 0.00 00:14:30.245 =================================================================================================================== 00:14:30.245 Total : 23955.10 93.57 0.00 0.00 0.00 0.00 0.00 00:14:30.245 00:14:30.245 00:14:30.245 Latency(us) 00:14:30.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.245 Nvme0n1 : 10.01 23954.68 93.57 0.00 0.00 5339.29 2162.69 24012.39 00:14:30.245 =================================================================================================================== 00:14:30.245 Total : 23954.68 93.57 0.00 0.00 5339.29 2162.69 24012.39 00:14:30.245 0 00:14:30.245 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3705104 00:14:30.245 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3705104 ']' 00:14:30.245 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3705104 00:14:30.245 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:14:30.245 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:30.245 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3705104 00:14:30.504 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:30.504 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:30.504 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3705104' 00:14:30.504 killing process with pid 3705104 00:14:30.504 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3705104 00:14:30.504 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.504 00:14:30.504 Latency(us) 00:14:30.504 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:30.504 =================================================================================================================== 00:14:30.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.504 15:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3705104 00:14:30.504 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:30.764 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:31.023 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:31.023 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:31.024 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:31.024 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:31.024 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:31.284 [2024-05-15 15:52:29.733654] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:31.284 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:31.550 request: 00:14:31.550 { 00:14:31.550 "uuid": "d5e6e691-4928-4aba-a7a4-e139fafdd93a", 00:14:31.550 "method": "bdev_lvol_get_lvstores", 00:14:31.550 "req_id": 1 00:14:31.550 } 00:14:31.550 Got JSON-RPC error response 00:14:31.550 response: 00:14:31.550 { 00:14:31.550 "code": -19, 00:14:31.550 "message": "No such device" 00:14:31.550 } 00:14:31.550 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:31.551 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:31.551 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:31.551 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:31.551 15:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:31.812 aio_bdev 00:14:31.812 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7b4934d4-e92a-465d-9c1d-68110ddedf65 00:14:31.812 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=7b4934d4-e92a-465d-9c1d-68110ddedf65 00:14:31.812 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:31.812 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:14:31.812 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:31.812 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:31.812 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:31.812 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7b4934d4-e92a-465d-9c1d-68110ddedf65 -t 2000 00:14:32.071 [ 00:14:32.071 { 00:14:32.071 "name": "7b4934d4-e92a-465d-9c1d-68110ddedf65", 00:14:32.071 "aliases": [ 00:14:32.071 "lvs/lvol" 00:14:32.071 ], 00:14:32.071 "product_name": "Logical Volume", 00:14:32.071 "block_size": 4096, 00:14:32.071 "num_blocks": 38912, 00:14:32.071 "uuid": "7b4934d4-e92a-465d-9c1d-68110ddedf65", 00:14:32.071 "assigned_rate_limits": { 00:14:32.071 "rw_ios_per_sec": 0, 00:14:32.071 "rw_mbytes_per_sec": 0, 00:14:32.071 "r_mbytes_per_sec": 0, 00:14:32.071 "w_mbytes_per_sec": 0 00:14:32.071 }, 00:14:32.071 "claimed": false, 00:14:32.071 "zoned": false, 00:14:32.071 "supported_io_types": { 00:14:32.071 "read": true, 00:14:32.071 "write": true, 00:14:32.071 "unmap": true, 00:14:32.071 "write_zeroes": true, 00:14:32.071 "flush": false, 00:14:32.071 "reset": true, 00:14:32.071 "compare": false, 00:14:32.071 "compare_and_write": false, 00:14:32.071 "abort": false, 00:14:32.071 "nvme_admin": false, 00:14:32.071 "nvme_io": false 00:14:32.071 }, 00:14:32.071 "driver_specific": { 00:14:32.071 "lvol": { 00:14:32.071 "lvol_store_uuid": "d5e6e691-4928-4aba-a7a4-e139fafdd93a", 00:14:32.071 "base_bdev": "aio_bdev", 
00:14:32.071 "thin_provision": false, 00:14:32.071 "num_allocated_clusters": 38, 00:14:32.071 "snapshot": false, 00:14:32.071 "clone": false, 00:14:32.071 "esnap_clone": false 00:14:32.071 } 00:14:32.071 } 00:14:32.071 } 00:14:32.071 ] 00:14:32.071 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:14:32.071 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:32.071 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:32.332 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:32.332 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:32.332 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:32.332 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:32.332 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b4934d4-e92a-465d-9c1d-68110ddedf65 00:14:32.591 15:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5e6e691-4928-4aba-a7a4-e139fafdd93a 00:14:32.851 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:32.851 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:32.851 00:14:32.851 real 0m15.728s 00:14:32.851 user 0m14.750s 00:14:32.851 sys 0m2.026s 00:14:32.851 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.851 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:32.851 ************************************ 00:14:32.851 END TEST lvs_grow_clean 00:14:32.851 ************************************ 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:33.111 ************************************ 00:14:33.111 START TEST lvs_grow_dirty 00:14:33.111 ************************************ 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:33.111 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:33.370 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:33.370 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:33.370 15:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:33.630 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:33.630 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:33.630 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 lvol 150 00:14:33.630 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d57d5fc8-c068-4f16-af89-0a603af8d75e 00:14:33.630 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:33.630 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:33.890 [2024-05-15 15:52:32.332856] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:33.890 [2024-05-15 15:52:32.332908] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:33.890 true 00:14:33.890 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:33.890 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:34.149 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:34.149 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:34.149 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d57d5fc8-c068-4f16-af89-0a603af8d75e 00:14:34.409 15:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:34.669 [2024-05-15 15:52:32.990802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3707888 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3707888 /var/tmp/bdevperf.sock 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3707888 ']' 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 15:52:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:34.669 [2024-05-15 15:52:33.214722] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
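The export path that bdevperf consumes below reduces to this RPC sequence — a minimal sketch distilled from the trace, with the long jenkins script paths abbreviated to rpc.py/bdevperf; the namespace UUID is the d57d5fc8 lvol created above:

    # target side: expose the lvol over NVMe/TCP
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d57d5fc8-c068-4f16-af89-0a603af8d75e
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf idles on its own RPC socket (-z) until perform_tests arrives
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # drives the 10 s randwrite run below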
00:14:34.669 [2024-05-15 15:52:33.214773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3707888 ] 00:14:34.929 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.929 [2024-05-15 15:52:33.284033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.929 [2024-05-15 15:52:33.352775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.496 15:52:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:35.496 15:52:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:35.496 15:52:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:35.756 Nvme0n1 00:14:35.756 15:52:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:36.015 [ 00:14:36.015 { 00:14:36.015 "name": "Nvme0n1", 00:14:36.015 "aliases": [ 00:14:36.015 "d57d5fc8-c068-4f16-af89-0a603af8d75e" 00:14:36.015 ], 00:14:36.015 "product_name": "NVMe disk", 00:14:36.015 "block_size": 4096, 00:14:36.015 "num_blocks": 38912, 00:14:36.015 "uuid": "d57d5fc8-c068-4f16-af89-0a603af8d75e", 00:14:36.015 "assigned_rate_limits": { 00:14:36.015 "rw_ios_per_sec": 0, 00:14:36.015 "rw_mbytes_per_sec": 0, 00:14:36.016 "r_mbytes_per_sec": 0, 00:14:36.016 "w_mbytes_per_sec": 0 00:14:36.016 }, 00:14:36.016 "claimed": false, 00:14:36.016 "zoned": false, 00:14:36.016 "supported_io_types": { 00:14:36.016 "read": true, 00:14:36.016 "write": true, 00:14:36.016 "unmap": true, 00:14:36.016 "write_zeroes": true, 00:14:36.016 "flush": true, 00:14:36.016 "reset": true, 00:14:36.016 "compare": true, 00:14:36.016 "compare_and_write": true, 00:14:36.016 "abort": true, 00:14:36.016 "nvme_admin": true, 00:14:36.016 "nvme_io": true 00:14:36.016 }, 00:14:36.016 "memory_domains": [ 00:14:36.016 { 00:14:36.016 "dma_device_id": "system", 00:14:36.016 "dma_device_type": 1 00:14:36.016 } 00:14:36.016 ], 00:14:36.016 "driver_specific": { 00:14:36.016 "nvme": [ 00:14:36.016 { 00:14:36.016 "trid": { 00:14:36.016 "trtype": "TCP", 00:14:36.016 "adrfam": "IPv4", 00:14:36.016 "traddr": "10.0.0.2", 00:14:36.016 "trsvcid": "4420", 00:14:36.016 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:36.016 }, 00:14:36.016 "ctrlr_data": { 00:14:36.016 "cntlid": 1, 00:14:36.016 "vendor_id": "0x8086", 00:14:36.016 "model_number": "SPDK bdev Controller", 00:14:36.016 "serial_number": "SPDK0", 00:14:36.016 "firmware_revision": "24.05", 00:14:36.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:36.016 "oacs": { 00:14:36.016 "security": 0, 00:14:36.016 "format": 0, 00:14:36.016 "firmware": 0, 00:14:36.016 "ns_manage": 0 00:14:36.016 }, 00:14:36.016 "multi_ctrlr": true, 00:14:36.016 "ana_reporting": false 00:14:36.016 }, 00:14:36.016 "vs": { 00:14:36.016 "nvme_version": "1.3" 00:14:36.016 }, 00:14:36.016 "ns_data": { 00:14:36.016 "id": 1, 00:14:36.016 "can_share": true 00:14:36.016 } 00:14:36.016 } 00:14:36.016 ], 00:14:36.016 "mp_policy": "active_passive" 00:14:36.016 } 00:14:36.016 } 00:14:36.016 ] 00:14:36.016 15:52:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3708083 00:14:36.016 15:52:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:36.016 15:52:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:36.016 Running I/O for 10 seconds... 00:14:37.013 Latency(us) 00:14:37.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.013 Nvme0n1 : 1.00 23100.00 90.23 0.00 0.00 0.00 0.00 0.00 00:14:37.013 =================================================================================================================== 00:14:37.013 Total : 23100.00 90.23 0.00 0.00 0.00 0.00 0.00 00:14:37.013 00:14:37.952 15:52:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:38.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.211 Nvme0n1 : 2.00 23194.00 90.60 0.00 0.00 0.00 0.00 0.00 00:14:38.211 =================================================================================================================== 00:14:38.211 Total : 23194.00 90.60 0.00 0.00 0.00 0.00 0.00 00:14:38.211 00:14:38.211 true 00:14:38.211 15:52:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:38.211 15:52:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:38.470 15:52:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:38.470 15:52:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:38.470 15:52:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3708083 00:14:39.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.039 Nvme0n1 : 3.00 23267.67 90.89 0.00 0.00 0.00 0.00 0.00 00:14:39.039 =================================================================================================================== 00:14:39.039 Total : 23267.67 90.89 0.00 0.00 0.00 0.00 0.00 00:14:39.039 00:14:40.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.419 Nvme0n1 : 4.00 23422.25 91.49 0.00 0.00 0.00 0.00 0.00 00:14:40.419 =================================================================================================================== 00:14:40.419 Total : 23422.25 91.49 0.00 0.00 0.00 0.00 0.00 00:14:40.419 00:14:41.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.357 Nvme0n1 : 5.00 23648.40 92.38 0.00 0.00 0.00 0.00 0.00 00:14:41.357 =================================================================================================================== 00:14:41.357 Total : 23648.40 92.38 0.00 0.00 0.00 0.00 0.00 00:14:41.357 00:14:42.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.294 Nvme0n1 : 6.00 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:14:42.294 
=================================================================================================================== 00:14:42.294 Total : 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:14:42.294 00:14:43.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.233 Nvme0n1 : 7.00 23651.14 92.39 0.00 0.00 0.00 0.00 0.00 00:14:43.233 =================================================================================================================== 00:14:43.233 Total : 23651.14 92.39 0.00 0.00 0.00 0.00 0.00 00:14:43.233 00:14:44.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.171 Nvme0n1 : 8.00 23770.25 92.85 0.00 0.00 0.00 0.00 0.00 00:14:44.171 =================================================================================================================== 00:14:44.171 Total : 23770.25 92.85 0.00 0.00 0.00 0.00 0.00 00:14:44.171 00:14:45.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.109 Nvme0n1 : 9.00 23866.89 93.23 0.00 0.00 0.00 0.00 0.00 00:14:45.109 =================================================================================================================== 00:14:45.109 Total : 23866.89 93.23 0.00 0.00 0.00 0.00 0.00 00:14:45.109 00:14:46.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.046 Nvme0n1 : 10.00 23918.60 93.43 0.00 0.00 0.00 0.00 0.00 00:14:46.046 =================================================================================================================== 00:14:46.046 Total : 23918.60 93.43 0.00 0.00 0.00 0.00 0.00 00:14:46.046 00:14:46.046 00:14:46.046 Latency(us) 00:14:46.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.046 Nvme0n1 : 10.01 23918.20 93.43 0.00 0.00 5347.93 2215.12 21286.09 00:14:46.046 =================================================================================================================== 00:14:46.046 Total : 23918.20 93.43 0.00 0.00 5347.93 2215.12 21286.09 00:14:46.046 0 00:14:46.046 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3707888 00:14:46.046 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3707888 ']' 00:14:46.046 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3707888 00:14:46.046 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:14:46.046 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:46.046 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3707888 00:14:46.306 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:46.306 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:46.306 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3707888' 00:14:46.306 killing process with pid 3707888 00:14:46.306 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3707888 00:14:46.306 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.306 00:14:46.306 Latency(us) 00:14:46.306 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:46.306 =================================================================================================================== 00:14:46.306 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.306 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3707888 00:14:46.306 15:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:46.565 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.824 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:46.824 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3704573 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3704573 00:14:47.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3704573 Killed "${NVMF_APP[@]}" "$@" 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3709978 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3709978 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3709978 ']' 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
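The cluster counts verified above follow from the 4 MiB cluster size the lvstore was created with (--cluster-sz 4194304): the 200M backing file gives 49 data clusters (the remainder holds lvstore metadata), growing it to 400M gives 99, and the 150M lvol pins ceil(150/4) = 38 of them, leaving 99 - 38 = 61 free. As a sketch, with the jenkins paths abbreviated, the grow-and-verify sequence is:

    truncate -s 400M aio_bdev_file            # backing file 200M -> 400M
    rpc.py bdev_aio_rescan aio_bdev           # block count 51200 -> 102400 (4 KiB blocks)
    rpc.py bdev_lvol_grow_lvstore -u fa67d660-225a-4070-bc80-e1f87eb9c5e8
    rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 \
        | jq -r '.[0].total_data_clusters'    # 49 before the grow, 99 after
    rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 \
        | jq -r '.[0].free_clusters'          # 99 total - 38 allocated = 61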
00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:47.083 15:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:47.083 [2024-05-15 15:52:45.504130] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:47.083 [2024-05-15 15:52:45.504182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.083 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.083 [2024-05-15 15:52:45.578783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.343 [2024-05-15 15:52:45.652510] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.343 [2024-05-15 15:52:45.652543] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.343 [2024-05-15 15:52:45.652552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.343 [2024-05-15 15:52:45.652561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.343 [2024-05-15 15:52:45.652568] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.343 [2024-05-15 15:52:45.652587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.912 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:47.912 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:47.912 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.912 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.912 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:47.912 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.912 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:48.171 [2024-05-15 15:52:46.506728] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:48.171 [2024-05-15 15:52:46.506824] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:48.171 [2024-05-15 15:52:46.506851] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d57d5fc8-c068-4f16-af89-0a603af8d75e 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=d57d5fc8-c068-4f16-af89-0a603af8d75e 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:48.171 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d57d5fc8-c068-4f16-af89-0a603af8d75e -t 2000 00:14:48.430 [ 00:14:48.430 { 00:14:48.430 "name": "d57d5fc8-c068-4f16-af89-0a603af8d75e", 00:14:48.430 "aliases": [ 00:14:48.430 "lvs/lvol" 00:14:48.430 ], 00:14:48.430 "product_name": "Logical Volume", 00:14:48.430 "block_size": 4096, 00:14:48.430 "num_blocks": 38912, 00:14:48.430 "uuid": "d57d5fc8-c068-4f16-af89-0a603af8d75e", 00:14:48.430 "assigned_rate_limits": { 00:14:48.430 "rw_ios_per_sec": 0, 00:14:48.430 "rw_mbytes_per_sec": 0, 00:14:48.430 "r_mbytes_per_sec": 0, 00:14:48.431 "w_mbytes_per_sec": 0 00:14:48.431 }, 00:14:48.431 "claimed": false, 00:14:48.431 "zoned": false, 00:14:48.431 "supported_io_types": { 00:14:48.431 "read": true, 00:14:48.431 "write": true, 00:14:48.431 "unmap": true, 00:14:48.431 "write_zeroes": true, 00:14:48.431 "flush": false, 00:14:48.431 "reset": true, 00:14:48.431 "compare": false, 00:14:48.431 "compare_and_write": false, 00:14:48.431 "abort": false, 00:14:48.431 "nvme_admin": false, 00:14:48.431 "nvme_io": false 00:14:48.431 }, 00:14:48.431 "driver_specific": { 00:14:48.431 "lvol": { 00:14:48.431 "lvol_store_uuid": "fa67d660-225a-4070-bc80-e1f87eb9c5e8", 00:14:48.431 "base_bdev": "aio_bdev", 00:14:48.431 "thin_provision": false, 00:14:48.431 "num_allocated_clusters": 38, 00:14:48.431 "snapshot": false, 00:14:48.431 "clone": false, 00:14:48.431 "esnap_clone": false 00:14:48.431 } 00:14:48.431 } 00:14:48.431 } 00:14:48.431 ] 00:14:48.431 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:48.431 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:48.431 15:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:48.690 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:48.690 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:48.690 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:48.690 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:48.690 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:48.949 [2024-05-15 15:52:47.375085] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:48.949 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:48.949 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:48.949 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:48.949 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.949 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.949 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.949 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.950 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.950 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.950 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.950 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:48.950 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:49.209 request: 00:14:49.209 { 00:14:49.209 "uuid": "fa67d660-225a-4070-bc80-e1f87eb9c5e8", 00:14:49.209 "method": "bdev_lvol_get_lvstores", 00:14:49.209 "req_id": 1 00:14:49.209 } 00:14:49.209 Got JSON-RPC error response 00:14:49.209 response: 00:14:49.209 { 00:14:49.209 "code": -19, 00:14:49.209 "message": "No such device" 00:14:49.209 } 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:49.209 aio_bdev 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d57d5fc8-c068-4f16-af89-0a603af8d75e 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=d57d5fc8-c068-4f16-af89-0a603af8d75e 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
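Unrolled from the NOT and waitforbdev helpers, the hot-remove round trip being exercised here is, as a sketch (jenkins paths abbreviated):

    rpc.py bdev_aio_delete aio_bdev          # base bdev removed -> lvstore closes (hotremove notice)
    rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 \
        && { echo "lvstore should be gone" >&2; exit 1; }    # expected failure: -19, "No such device"
    rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096   # re-attach the same backing file
    rpc.py bdev_wait_for_examine             # examine reloads the persisted lvstore
    rpc.py bdev_get_bdevs -b d57d5fc8-c068-4f16-af89-0a603af8d75e -t 2000   # lvol is back intact

The blobstore recovery notices a moment earlier in the trace belong to the first re-create after the target was killed with -9; this second delete/create cycle is a clean hot-remove, so the lvstore reloads without recovery.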
00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:49.209 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:49.468 15:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d57d5fc8-c068-4f16-af89-0a603af8d75e -t 2000 00:14:49.728 [ 00:14:49.728 { 00:14:49.728 "name": "d57d5fc8-c068-4f16-af89-0a603af8d75e", 00:14:49.728 "aliases": [ 00:14:49.728 "lvs/lvol" 00:14:49.728 ], 00:14:49.728 "product_name": "Logical Volume", 00:14:49.728 "block_size": 4096, 00:14:49.728 "num_blocks": 38912, 00:14:49.728 "uuid": "d57d5fc8-c068-4f16-af89-0a603af8d75e", 00:14:49.728 "assigned_rate_limits": { 00:14:49.728 "rw_ios_per_sec": 0, 00:14:49.728 "rw_mbytes_per_sec": 0, 00:14:49.728 "r_mbytes_per_sec": 0, 00:14:49.728 "w_mbytes_per_sec": 0 00:14:49.728 }, 00:14:49.728 "claimed": false, 00:14:49.728 "zoned": false, 00:14:49.728 "supported_io_types": { 00:14:49.728 "read": true, 00:14:49.728 "write": true, 00:14:49.728 "unmap": true, 00:14:49.728 "write_zeroes": true, 00:14:49.728 "flush": false, 00:14:49.728 "reset": true, 00:14:49.728 "compare": false, 00:14:49.728 "compare_and_write": false, 00:14:49.728 "abort": false, 00:14:49.728 "nvme_admin": false, 00:14:49.728 "nvme_io": false 00:14:49.728 }, 00:14:49.728 "driver_specific": { 00:14:49.728 "lvol": { 00:14:49.728 "lvol_store_uuid": "fa67d660-225a-4070-bc80-e1f87eb9c5e8", 00:14:49.728 "base_bdev": "aio_bdev", 00:14:49.728 "thin_provision": false, 00:14:49.728 "num_allocated_clusters": 38, 00:14:49.728 "snapshot": false, 00:14:49.728 "clone": false, 00:14:49.728 "esnap_clone": false 00:14:49.728 } 00:14:49.728 } 00:14:49.728 } 00:14:49.728 ] 00:14:49.728 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:49.728 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:49.728 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:49.728 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:49.728 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:49.728 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:49.987 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:49.987 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d57d5fc8-c068-4f16-af89-0a603af8d75e 00:14:50.247 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa67d660-225a-4070-bc80-e1f87eb9c5e8 00:14:50.247 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:50.507 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:50.507 00:14:50.507 real 0m17.506s 00:14:50.507 user 0m43.723s 00:14:50.507 sys 0m4.937s 00:14:50.507 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:50.507 15:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:50.507 ************************************ 00:14:50.507 END TEST lvs_grow_dirty 00:14:50.507 ************************************ 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:50.507 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:50.507 nvmf_trace.0 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.766 rmmod nvme_tcp 00:14:50.766 rmmod nvme_fabrics 00:14:50.766 rmmod nvme_keyring 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3709978 ']' 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3709978 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3709978 ']' 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3709978 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3709978 00:14:50.766 15:52:49 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3709978' 00:14:50.766 killing process with pid 3709978 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3709978 00:14:50.766 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3709978 00:14:51.026 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:51.026 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:51.026 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:51.026 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.026 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.026 15:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.026 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.026 15:52:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.028 15:52:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:53.028 00:14:53.028 real 0m44.247s 00:14:53.028 user 1m4.663s 00:14:53.028 sys 0m13.005s 00:14:53.028 15:52:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:53.028 15:52:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:53.028 ************************************ 00:14:53.028 END TEST nvmf_lvs_grow 00:14:53.028 ************************************ 00:14:53.028 15:52:51 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:53.028 15:52:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:53.028 15:52:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:53.028 15:52:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.028 ************************************ 00:14:53.028 START TEST nvmf_bdev_io_wait 00:14:53.028 ************************************ 00:14:53.028 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:53.288 * Looking for test storage... 
00:14:53.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.288 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.289 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.289 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.289 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.289 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.289 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:53.289 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:53.289 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.289 15:52:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:59.860 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:59.860 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:59.860 Found net devices under 0000:af:00.0: cvl_0_0 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:59.860 Found net devices under 0000:af:00.1: cvl_0_1 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.860 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.861 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.861 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.861 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.861 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.861 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:00.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:00.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:15:00.120 00:15:00.120 --- 10.0.0.2 ping statistics --- 00:15:00.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.120 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:15:00.120 00:15:00.120 --- 10.0.0.1 ping statistics --- 00:15:00.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.120 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3714327 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3714327 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3714327 ']' 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.120 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:00.121 15:52:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.121 [2024-05-15 15:52:58.667418] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
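The nvmf_tgt just launched runs inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which pauses startup until an RPC resumes it; that ordering is what lets the script set bdev options before the framework initializes. A sketch of the same sequence issued by hand (paths relative to an SPDK root and the rpc.py location are assumptions; the option values are the ones traced below):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
./scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool, presumably to force the IO-wait path this test exercises
./scripts/rpc.py framework_start_init         # resume the paused startup
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192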
00:15:00.121 [2024-05-15 15:52:58.667467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.380 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.380 [2024-05-15 15:52:58.740772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.380 [2024-05-15 15:52:58.816852] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.380 [2024-05-15 15:52:58.816887] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.380 [2024-05-15 15:52:58.816897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.380 [2024-05-15 15:52:58.816905] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.380 [2024-05-15 15:52:58.816912] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.380 [2024-05-15 15:52:58.816957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.380 [2024-05-15 15:52:58.817051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.380 [2024-05-15 15:52:58.817136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.380 [2024-05-15 15:52:58.817137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.948 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:00.948 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:15:00.948 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.948 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:00.948 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:01.208 [2024-05-15 15:52:59.595382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.208 15:52:59 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:01.208 Malloc0 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.208 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:01.209 [2024-05-15 15:52:59.653951] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:01.209 [2024-05-15 15:52:59.654229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3714609 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3714611 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:01.209 { 00:15:01.209 "params": { 00:15:01.209 "name": "Nvme$subsystem", 00:15:01.209 "trtype": "$TEST_TRANSPORT", 00:15:01.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.209 "adrfam": "ipv4", 00:15:01.209 "trsvcid": "$NVMF_PORT", 00:15:01.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:01.209 "hdgst": ${hdgst:-false}, 00:15:01.209 "ddgst": ${ddgst:-false} 00:15:01.209 }, 00:15:01.209 "method": 
"bdev_nvme_attach_controller" 00:15:01.209 } 00:15:01.209 EOF 00:15:01.209 )") 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3714613 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:01.209 { 00:15:01.209 "params": { 00:15:01.209 "name": "Nvme$subsystem", 00:15:01.209 "trtype": "$TEST_TRANSPORT", 00:15:01.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.209 "adrfam": "ipv4", 00:15:01.209 "trsvcid": "$NVMF_PORT", 00:15:01.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:01.209 "hdgst": ${hdgst:-false}, 00:15:01.209 "ddgst": ${ddgst:-false} 00:15:01.209 }, 00:15:01.209 "method": "bdev_nvme_attach_controller" 00:15:01.209 } 00:15:01.209 EOF 00:15:01.209 )") 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3714616 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:01.209 { 00:15:01.209 "params": { 00:15:01.209 "name": "Nvme$subsystem", 00:15:01.209 "trtype": "$TEST_TRANSPORT", 00:15:01.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.209 "adrfam": "ipv4", 00:15:01.209 "trsvcid": "$NVMF_PORT", 00:15:01.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:01.209 "hdgst": ${hdgst:-false}, 00:15:01.209 "ddgst": ${ddgst:-false} 00:15:01.209 }, 00:15:01.209 "method": "bdev_nvme_attach_controller" 00:15:01.209 } 00:15:01.209 EOF 00:15:01.209 )") 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:01.209 { 00:15:01.209 "params": { 00:15:01.209 "name": "Nvme$subsystem", 00:15:01.209 "trtype": "$TEST_TRANSPORT", 00:15:01.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.209 "adrfam": "ipv4", 00:15:01.209 "trsvcid": "$NVMF_PORT", 00:15:01.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:01.209 "hdgst": ${hdgst:-false}, 00:15:01.209 "ddgst": ${ddgst:-false} 00:15:01.209 }, 00:15:01.209 "method": "bdev_nvme_attach_controller" 00:15:01.209 } 00:15:01.209 EOF 00:15:01.209 )") 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3714609 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:01.209 "params": { 00:15:01.209 "name": "Nvme1", 00:15:01.209 "trtype": "tcp", 00:15:01.209 "traddr": "10.0.0.2", 00:15:01.209 "adrfam": "ipv4", 00:15:01.209 "trsvcid": "4420", 00:15:01.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.209 "hdgst": false, 00:15:01.209 "ddgst": false 00:15:01.209 }, 00:15:01.209 "method": "bdev_nvme_attach_controller" 00:15:01.209 }' 00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
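Each printf fragment above is one bdev_nvme_attach_controller entry; gen_nvmf_target_json runs the assembled document through jq and hands it to a bdevperf instance as --json /dev/fd/63 via process substitution. A hand-written equivalent might look like the following sketch (the outer subsystems/bdev wrapper is an assumption about the assembled shape; only the inner params object appears verbatim in the trace):

./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
)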
00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:01.209 "params": {
00:15:01.209 "name": "Nvme1",
00:15:01.209 "trtype": "tcp",
00:15:01.209 "traddr": "10.0.0.2",
00:15:01.209 "adrfam": "ipv4",
00:15:01.209 "trsvcid": "4420",
00:15:01.209 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:01.209 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:01.209 "hdgst": false,
00:15:01.209 "ddgst": false
00:15:01.209 },
00:15:01.209 "method": "bdev_nvme_attach_controller"
00:15:01.209 }'
00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:01.209 "params": {
00:15:01.209 "name": "Nvme1",
00:15:01.209 "trtype": "tcp",
00:15:01.209 "traddr": "10.0.0.2",
00:15:01.209 "adrfam": "ipv4",
00:15:01.209 "trsvcid": "4420",
00:15:01.209 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:01.209 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:01.209 "hdgst": false,
00:15:01.209 "ddgst": false
00:15:01.209 },
00:15:01.209 "method": "bdev_nvme_attach_controller"
00:15:01.209 }'
00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:15:01.209 15:52:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:01.209 "params": {
00:15:01.209 "name": "Nvme1",
00:15:01.209 "trtype": "tcp",
00:15:01.209 "traddr": "10.0.0.2",
00:15:01.209 "adrfam": "ipv4",
00:15:01.209 "trsvcid": "4420",
00:15:01.209 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:01.209 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:01.209 "hdgst": false,
00:15:01.209 "ddgst": false
00:15:01.209 },
00:15:01.209 "method": "bdev_nvme_attach_controller"
00:15:01.209 }'
00:15:01.209 [2024-05-15 15:52:59.704807] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:15:01.210 [2024-05-15 15:52:59.704806] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:15:01.210 [2024-05-15 15:52:59.704860] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:15:01.210 [2024-05-15 15:52:59.704861] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:15:01.210 [2024-05-15 15:52:59.708101] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:15:01.210 [2024-05-15 15:52:59.708142] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:15:01.210 [2024-05-15 15:52:59.711716] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
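Four bdevperf instances start back to back here: write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80. Each is given a distinct shared-memory id (-i 1 through -i 4), which is what yields the per-process DPDK file prefixes spdk1 through spdk4 in the EAL banners and keeps the concurrent runs from clashing over hugepage files. A compact sketch of the same launch pattern (assumes nvmf/common.sh is sourced so gen_nvmf_target_json, the helper traced above, is available):

for spec in '0x10 1 write' '0x20 2 read' '0x40 3 flush' '0x80 4 unmap'; do
  set -- $spec   # core mask, shm id, workload
  ./build/examples/bdevperf -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w "$3" -t 1 -s 256 &
done
wait   # the script itself waits on the individual PIDs (3714609/11/13/16)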
00:15:01.210 [2024-05-15 15:52:59.711765] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:15:01.210 EAL: No free 2048 kB hugepages reported on node 1
00:15:01.469 EAL: No free 2048 kB hugepages reported on node 1
00:15:01.469 [2024-05-15 15:52:59.890016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:01.469 EAL: No free 2048 kB hugepages reported on node 1
00:15:01.469 [2024-05-15 15:52:59.965213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:15:01.469 [2024-05-15 15:52:59.985794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:01.469 EAL: No free 2048 kB hugepages reported on node 1
00:15:01.729 [2024-05-15 15:53:00.042456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:01.729 [2024-05-15 15:53:00.064941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:15:01.729 [2024-05-15 15:53:00.120180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:15:01.729 [2024-05-15 15:53:00.134056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:01.729 [2024-05-15 15:53:00.213079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:15:01.729 Running I/O for 1 seconds...
00:15:01.729 Running I/O for 1 seconds...
00:15:01.988 Running I/O for 1 seconds...
00:15:01.988 Running I/O for 1 seconds...
00:15:02.928
00:15:02.928 Latency(us)
00:15:02.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:02.928 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:15:02.928 Nvme1n1 : 1.00 13248.37 51.75 0.00 0.00 9634.25 4849.66 32086.43
00:15:02.928 ===================================================================================================================
00:15:02.928 Total : 13248.37 51.75 0.00 0.00 9634.25 4849.66 32086.43
00:15:02.928
00:15:02.928 Latency(us)
00:15:02.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:02.928 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:15:02.928 Nvme1n1 : 1.01 6691.41 26.14 0.00 0.00 18999.55 5190.45 24326.96
00:15:02.928 ===================================================================================================================
00:15:02.928 Total : 6691.41 26.14 0.00 0.00 18999.55 5190.45 24326.96
00:15:02.928
00:15:02.928 Latency(us)
00:15:02.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:02.928 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:15:02.928 Nvme1n1 : 1.00 257779.69 1006.95 0.00 0.00 494.94 209.72 632.42
00:15:02.928 ===================================================================================================================
00:15:02.928 Total : 257779.69 1006.95 0.00 0.00 494.94 209.72 632.42
00:15:02.928
00:15:02.929 Latency(us)
00:15:02.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:02.929 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:15:02.929 Nvme1n1 : 1.01 7657.84 29.91 0.00 0.00 16656.76 5924.45 43201.33
00:15:02.929 ===================================================================================================================
00:15:02.929 Total : 7657.84 29.91 0.00 0.00 16656.76 5924.45 43201.33
00:15:02.929 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- #
wait 3714611 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3714613 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3714616 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.188 rmmod nvme_tcp 00:15:03.188 rmmod nvme_fabrics 00:15:03.188 rmmod nvme_keyring 00:15:03.188 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3714327 ']' 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3714327 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3714327 ']' 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3714327 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3714327 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3714327' 00:15:03.448 killing process with pid 3714327 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3714327 00:15:03.448 [2024-05-15 15:53:01.811608] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:03.448 15:53:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3714327 00:15:03.448 15:53:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:03.448 15:53:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:03.448 15:53:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:03.448 15:53:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.448 15:53:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:03.448 15:53:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.448 15:53:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.448 15:53:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.987 15:53:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:05.987 00:15:05.987 real 0m12.509s 00:15:05.987 user 0m19.855s 00:15:05.987 sys 0m7.241s 00:15:05.987 15:53:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:05.987 15:53:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.987 ************************************ 00:15:05.987 END TEST nvmf_bdev_io_wait 00:15:05.987 ************************************ 00:15:05.987 15:53:04 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:05.987 15:53:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:05.987 15:53:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:05.987 15:53:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:05.987 ************************************ 00:15:05.987 START TEST nvmf_queue_depth 00:15:05.987 ************************************ 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:05.987 * Looking for test storage... 
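The teardown traced just above repeats between every test in this suite: unload the host-side NVMe modules, kill the target, and flush the namespace plumbing before the next fixture is built. Roughly, in plain commands (the module unload and address flush are verbatim from the trace; the netns delete is an assumption about _remove_spdk_ns, whose body is not traced here):

modprobe -v -r nvme-tcp            # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
kill "$nvmfpid"                    # killprocess 3714327 in the trace
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed content of _remove_spdk_ns
ip -4 addr flush cvl_0_1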
00:15:05.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:05.987 15:53:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:05.988 15:53:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.562 
15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:12.562 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:12.562 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:12.563 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:12.563 Found net devices under 0000:af:00.0: cvl_0_0 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:12.563 Found net devices under 0000:af:00.1: cvl_0_1 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.563 15:53:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:15:12.563 00:15:12.563 --- 10.0.0.2 ping statistics --- 00:15:12.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.563 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:15:12.563 00:15:12.563 --- 10.0.0.1 ping statistics --- 00:15:12.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.563 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3718609 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3718609 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3718609 ']' 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:12.563 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.563 [2024-05-15 15:53:11.121773] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
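The nvmf_tcp_init sequence traced above gives a single host both roles: one port of the E810 pair is moved into a private network namespace to act as the target, while its sibling stays in the root namespace as the initiator. Collected from the trace into a standalone sketch (interface names cvl_0_0/cvl_0_1 as in this run):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # drop stale addresses
    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

Every command is taken verbatim from the nvmf/common.sh lines above; only the ordering comments are added. The two pings are the readiness gate: the test proceeds only once traffic flows in both directions.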
00:15:12.563 [2024-05-15 15:53:11.121820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.822 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.822 [2024-05-15 15:53:11.195489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.822 [2024-05-15 15:53:11.268045] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.822 [2024-05-15 15:53:11.268079] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.822 [2024-05-15 15:53:11.268088] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.822 [2024-05-15 15:53:11.268097] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.822 [2024-05-15 15:53:11.268104] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.822 [2024-05-15 15:53:11.268128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.390 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:13.390 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:13.390 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.390 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.390 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:13.650 [2024-05-15 15:53:11.962166] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:13.650 Malloc0 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.650 15:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.650 15:53:12 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:13.650 [2024-05-15 15:53:12.019830] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:13.650 [2024-05-15 15:53:12.020058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3718882 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3718882 /var/tmp/bdevperf.sock 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3718882 ']' 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:13.650 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:13.650 [2024-05-15 15:53:12.069168] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
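With connectivity verified, the queue_depth test body reduces to a short RPC sequence against the target followed by a deep-queue bdevperf run. A sketch of the same steps driven by hand with rpc.py; $SPDK is shorthand introduced here for the long workspace path, while the RPC names and flags are exactly those in the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # target side (RPC socket /var/tmp/spdk.sock)
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: 1024 outstanding 4 KiB verify I/Os for 10 s
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # (wait for /var/tmp/bdevperf.sock before issuing RPCs, as waitforlisten does)
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Because the RPC sockets live on the shared filesystem under /var/tmp, the rpc.py calls work from the root namespace even though nvmf_tgt itself runs under ip netns exec cvl_0_0_ns_spdk.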
00:15:13.650 [2024-05-15 15:53:12.069219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718882 ] 00:15:13.650 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.650 [2024-05-15 15:53:12.138669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.910 [2024-05-15 15:53:12.214045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.478 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:14.478 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:14.478 15:53:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:14.478 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.478 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:14.478 NVMe0n1 00:15:14.478 15:53:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.478 15:53:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.737 Running I/O for 10 seconds... 00:15:24.763 00:15:24.763 Latency(us) 00:15:24.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.763 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:24.763 Verification LBA range: start 0x0 length 0x4000 00:15:24.763 NVMe0n1 : 10.07 12917.31 50.46 0.00 0.00 79035.26 19398.66 57461.96 00:15:24.763 =================================================================================================================== 00:15:24.763 Total : 12917.31 50.46 0.00 0.00 79035.26 19398.66 57461.96 00:15:24.763 0 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3718882 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3718882 ']' 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3718882 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3718882 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3718882' 00:15:24.763 killing process with pid 3718882 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3718882 00:15:24.763 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.763 00:15:24.763 Latency(us) 00:15:24.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.763 =================================================================================================================== 00:15:24.763 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.763 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3718882 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.023 rmmod nvme_tcp 00:15:25.023 rmmod nvme_fabrics 00:15:25.023 rmmod nvme_keyring 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3718609 ']' 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3718609 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3718609 ']' 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3718609 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3718609 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3718609' 00:15:25.023 killing process with pid 3718609 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3718609 00:15:25.023 [2024-05-15 15:53:23.553028] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:25.023 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3718609 00:15:25.283 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.283 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.283 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.283 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.283 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.283 15:53:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.283 15:53:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.283 15:53:23 nvmf_tcp.nvmf_queue_depth -- 
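The run above is internally consistent with Little's law: at a fixed queue depth Q and throughput X, mean latency is L = Q/X, and 1024 / 12917.31 IOPS = 79.3 ms against the reported 79035.26 us average; likewise 12917.31 IOPS x 4096 B = 50.46 MiB/s, matching the throughput column. The all-zero table printed after the shutdown signal is the kill-path summary for an already-finished job, not a second measurement.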
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.820 15:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.820 00:15:27.820 real 0m21.688s 00:15:27.820 user 0m24.789s 00:15:27.820 sys 0m7.189s 00:15:27.820 15:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:27.820 15:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.820 ************************************ 00:15:27.820 END TEST nvmf_queue_depth 00:15:27.820 ************************************ 00:15:27.820 15:53:25 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:27.820 15:53:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:27.820 15:53:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:27.820 15:53:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.820 ************************************ 00:15:27.820 START TEST nvmf_target_multipath 00:15:27.820 ************************************ 00:15:27.820 15:53:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:27.820 * Looking for test storage... 00:15:27.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.820 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.821 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.821 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.821 15:53:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.821 15:53:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.821 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.821 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.821 15:53:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.821 15:53:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:34.395 15:53:32 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:34.395 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:34.395 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.395 15:53:32 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:34.395 Found net devices under 0000:af:00.0: cvl_0_0 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:34.395 Found net devices under 0000:af:00.1: cvl_0_1 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.395 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath 
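The discovery loop above (nvmf/common.sh@382-401, identical to the queue_depth run) maps each allow-listed PCI address to its kernel interfaces purely through sysfs, with no driver-specific tooling. The same lookup in isolation, assuming one E810 port at 0000:af:00.0 as on this host:

    pci=0000:af:00.0
    # a bound NIC exposes its netdevs under /sys/bus/pci/devices/<bdf>/net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

For 0000:af:00.0 the glob resolves to cvl_0_0, and the sibling function 0000:af:00.1 yields cvl_0_1, exactly as echoed above.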
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:34.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:15:34.396 00:15:34.396 --- 10.0.0.2 ping statistics --- 00:15:34.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.396 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:15:34.396 00:15:34.396 --- 10.0.0.1 ping statistics --- 00:15:34.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.396 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:34.396 only one NIC for nvmf test 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.396 rmmod nvme_tcp 00:15:34.396 rmmod nvme_fabrics 00:15:34.396 rmmod nvme_keyring 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.396 15:53:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.933 00:15:36.933 real 0m9.006s 00:15:36.933 user 0m1.866s 00:15:36.933 sys 0m5.165s 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:36.933 15:53:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:36.933 ************************************ 00:15:36.933 END TEST nvmf_target_multipath 00:15:36.933 ************************************ 00:15:36.933 15:53:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:36.933 15:53:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:36.933 15:53:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:36.933 15:53:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.933 ************************************ 00:15:36.933 START TEST nvmf_zcopy 00:15:36.933 ************************************ 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:36.933 * Looking for test storage... 
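nvmf_target_multipath above ends in an early, clean exit rather than a failure: this rig has only one usable port pair, so common.sh leaves NVMF_SECOND_TARGET_IP empty and multipath.sh skips the test. Reconstructed from the @45-@48 trace records, the guard is simply:

    # multipath needs a second target-side path; skip cleanly when it is absent
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
            echo 'only one NIC for nvmf test'
            nvmftestfini    # unload nvme-tcp/fabrics, delete the netns, flush addresses
            exit 0          # counts as a pass, hence "END TEST nvmf_target_multipath"
    fi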
00:15:36.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:36.933 15:53:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.934 15:53:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:43.508 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.508 
15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:43.508 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.508 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:43.509 Found net devices under 0000:af:00.0: cvl_0_0 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:43.509 Found net devices under 0000:af:00.1: cvl_0_1 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:43.509 15:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.509 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:43.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:15:43.769 00:15:43.769 --- 10.0.0.2 ping statistics --- 00:15:43.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.769 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:15:43.769 00:15:43.769 --- 10.0.0.1 ping statistics --- 00:15:43.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.769 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3728135 00:15:43.769 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.770 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3728135 00:15:43.770 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3728135 ']' 00:15:43.770 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.770 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:43.770 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.770 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:43.770 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:43.770 [2024-05-15 15:53:42.205993] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:43.770 [2024-05-15 15:53:42.206041] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.770 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.770 [2024-05-15 15:53:42.278652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.030 [2024-05-15 15:53:42.347342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.030 [2024-05-15 15:53:42.347378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
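For reference, the nvmf_tcp_init sequence traced above boils down to the following steps; the interface names (cvl_0_0, cvl_0_1), addresses and network-namespace name are the ones printed in this run, so read this as a condensed sketch of what nvmf/common.sh just did rather than a verbatim excerpt:

# Flush the two E810 ports, move the target-side port into its own namespace,
# and address both ends of the link
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator-side interface and check reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch the NVMe-oF target inside the namespace (core mask 0x2, trace groups 0xFFFF);
# the harness then waits for it to listen on /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &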
00:15:44.030 [2024-05-15 15:53:42.347387] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.030 [2024-05-15 15:53:42.347395] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.030 [2024-05-15 15:53:42.347418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.030 [2024-05-15 15:53:42.347439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.599 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:44.599 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:15:44.599 15:53:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:44.599 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:44.599 15:53:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:44.599 [2024-05-15 15:53:43.045635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:44.599 [2024-05-15 15:53:43.061609] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:44.599 [2024-05-15 15:53:43.061839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:44.599 malloc0 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:44.599 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:44.599 { 00:15:44.599 "params": { 00:15:44.599 "name": "Nvme$subsystem", 00:15:44.599 "trtype": "$TEST_TRANSPORT", 00:15:44.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:44.600 "adrfam": "ipv4", 00:15:44.600 "trsvcid": "$NVMF_PORT", 00:15:44.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:44.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:44.600 "hdgst": ${hdgst:-false}, 00:15:44.600 "ddgst": ${ddgst:-false} 00:15:44.600 }, 00:15:44.600 "method": "bdev_nvme_attach_controller" 00:15:44.600 } 00:15:44.600 EOF 00:15:44.600 )") 00:15:44.600 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:44.600 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:44.600 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:44.600 15:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:44.600 "params": { 00:15:44.600 "name": "Nvme1", 00:15:44.600 "trtype": "tcp", 00:15:44.600 "traddr": "10.0.0.2", 00:15:44.600 "adrfam": "ipv4", 00:15:44.600 "trsvcid": "4420", 00:15:44.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.600 "hdgst": false, 00:15:44.600 "ddgst": false 00:15:44.600 }, 00:15:44.600 "method": "bdev_nvme_attach_controller" 00:15:44.600 }' 00:15:44.600 [2024-05-15 15:53:43.140849] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:44.600 [2024-05-15 15:53:43.140898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728302 ] 00:15:44.860 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.860 [2024-05-15 15:53:43.209896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.860 [2024-05-15 15:53:43.283956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.118 Running I/O for 10 seconds... 
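Condensing the xtrace above, the zcopy target and the first workload come down to the following calls (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock); every flag is copied from the trace, only the layout is new:

# TCP transport with zero-copy enabled, plus an allow-any-host subsystem listening on 10.0.0.2:4420
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MB malloc bdev with 4 KiB blocks, exported as namespace 1
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: bdevperf consumes the JSON emitted by gen_nvmf_target_json
# (a single bdev_nvme_attach_controller for Nvme1 at 10.0.0.2:4420, shown as
# --json /dev/fd/62 in the trace) and drives a 10 s verify workload at queue
# depth 128 with 8 KiB I/Os
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The latency table that follows is the summary bdevperf prints for that 10-second run.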
00:15:55.163 00:15:55.163 Latency(us) 00:15:55.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.163 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:55.163 Verification LBA range: start 0x0 length 0x1000 00:15:55.163 Nvme1n1 : 10.01 8711.24 68.06 0.00 0.00 14652.28 983.04 46137.34 00:15:55.163 =================================================================================================================== 00:15:55.163 Total : 8711.24 68.06 0.00 0.00 14652.28 983.04 46137.34 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3730024 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:55.163 15:53:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:55.163 { 00:15:55.163 "params": { 00:15:55.164 "name": "Nvme$subsystem", 00:15:55.164 "trtype": "$TEST_TRANSPORT", 00:15:55.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:55.164 "adrfam": "ipv4", 00:15:55.164 "trsvcid": "$NVMF_PORT", 00:15:55.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:55.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:55.164 "hdgst": ${hdgst:-false}, 00:15:55.164 "ddgst": ${ddgst:-false} 00:15:55.164 }, 00:15:55.164 "method": "bdev_nvme_attach_controller" 00:15:55.164 } 00:15:55.164 EOF 00:15:55.164 )") 00:15:55.164 [2024-05-15 15:53:53.720090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.164 [2024-05-15 15:53:53.720123] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.164 15:53:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:55.423 15:53:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:55.423 [2024-05-15 15:53:53.728074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.423 [2024-05-15 15:53:53.728089] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.423 15:53:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:55.423 15:53:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:55.423 "params": { 00:15:55.423 "name": "Nvme1", 00:15:55.423 "trtype": "tcp", 00:15:55.423 "traddr": "10.0.0.2", 00:15:55.423 "adrfam": "ipv4", 00:15:55.423 "trsvcid": "4420", 00:15:55.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:55.423 "hdgst": false, 00:15:55.423 "ddgst": false 00:15:55.423 }, 00:15:55.423 "method": "bdev_nvme_attach_controller" 00:15:55.423 }' 00:15:55.423 [2024-05-15 15:53:53.736091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.423 [2024-05-15 15:53:53.736104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.423 [2024-05-15 15:53:53.744111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.423 [2024-05-15 15:53:53.744122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.423 [2024-05-15 15:53:53.752140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.423 [2024-05-15 15:53:53.752155] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.423 [2024-05-15 15:53:53.760155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.423 [2024-05-15 15:53:53.760165] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.423 [2024-05-15 15:53:53.761678] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:55.423 [2024-05-15 15:53:53.761721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730024 ] 00:15:55.423 [2024-05-15 15:53:53.768175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.423 [2024-05-15 15:53:53.768186] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.423 [2024-05-15 15:53:53.776202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.776214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.784222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.784234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.792243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.792254] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.424 [2024-05-15 15:53:53.800259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.800271] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.808281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.808293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.816302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.816314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.824323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.824334] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.830900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.424 [2024-05-15 15:53:53.832344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.832355] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.840367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.840382] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.848388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.848400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.856410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.856421] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.864429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.864441] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.872455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.872476] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.880472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.880483] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.888493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.888504] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.896514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.896525] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.901738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.424 [2024-05-15 15:53:53.904536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.904549] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.912564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.912581] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.920586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.920603] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.928607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.928625] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.936623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.936637] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.944643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.944656] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.952664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.952676] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.960687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.960702] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.968705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.968717] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.976725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.976737] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:15:55.424 [2024-05-15 15:53:53.984748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.424 [2024-05-15 15:53:53.984759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.684 [2024-05-15 15:53:53.992791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.684 [2024-05-15 15:53:53.992812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.684 [2024-05-15 15:53:54.000797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.684 [2024-05-15 15:53:54.000813] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.684 [2024-05-15 15:53:54.008817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.684 [2024-05-15 15:53:54.008833] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.684 [2024-05-15 15:53:54.016861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.684 [2024-05-15 15:53:54.016877] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.684 [2024-05-15 15:53:54.024863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.684 [2024-05-15 15:53:54.024878] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.684 [2024-05-15 15:53:54.032886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.684 [2024-05-15 15:53:54.032901] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.040907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.040920] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.048927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.048938] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.056954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.056972] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.064972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.064983] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 Running I/O for 5 seconds... 
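From here on, the second bdevperf instance (the randrw invocation above: -t 5 -q 128 -w randrw -M 50 -o 8192, again fed a gen_nvmf_target_json config through /dev/fd/63) runs a 5-second 50/50 random read/write workload, while the RPC errors interleaved with it come from repeated nvmf_subsystem_add_ns calls against the live subsystem. Since namespace 1 is still attached from the setup above, every attempt is rejected, producing the long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs; the I/O job itself keeps running. The error pair can be reproduced by hand with nothing more than a duplicate add (illustration only; the exact loop in target/zcopy.sh is not visible in this excerpt):

# Namespace 1 already exists, so a second add with the same NSID is rejected
# with exactly the two errors traced here
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
#   subsystem.c: Requested NSID 1 already in use
#   nvmf_rpc.c:  Unable to add namespace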
00:15:55.685 [2024-05-15 15:53:54.088308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.088329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.099804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.099823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.106881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.106900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.117404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.117424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.126069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.126089] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.133733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.133752] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.144075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.144095] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.152748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.152767] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.160905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.160929] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.169580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.169598] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.177947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.177966] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.186406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.186424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.195146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.195165] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.204360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.204380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.213396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 
[2024-05-15 15:53:54.213415] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.222388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.222407] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.231220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.231240] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.685 [2024-05-15 15:53:54.239713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.685 [2024-05-15 15:53:54.239732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.248202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.248221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.257005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.257026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.265702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.265721] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.274507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.274526] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.282523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.282543] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.291279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.291298] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.299950] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.299969] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.308234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.308253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.317010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.317029] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.325656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.325675] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.334260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.334280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.342828] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.342846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.351446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.351464] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.359588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.359606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.367830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.367848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.376568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.376587] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.387258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.387277] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.397164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.397182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.405606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.405624] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.415671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.415689] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.426028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.426048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.434741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.434759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.445820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.945 [2024-05-15 15:53:54.445839] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.945 [2024-05-15 15:53:54.452394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.946 [2024-05-15 15:53:54.452412] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.946 [2024-05-15 15:53:54.463185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.946 [2024-05-15 15:53:54.463213] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.946 [2024-05-15 15:53:54.472646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.946 [2024-05-15 15:53:54.472665] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.946 [2024-05-15 15:53:54.481009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.946 [2024-05-15 15:53:54.481028] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.946 [2024-05-15 15:53:54.489401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.946 [2024-05-15 15:53:54.489426] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.946 [2024-05-15 15:53:54.498117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.946 [2024-05-15 15:53:54.498135] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.946 [2024-05-15 15:53:54.507113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.946 [2024-05-15 15:53:54.507131] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.515762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.515781] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.523679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.523698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.533140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.533158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.540546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.540564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.551347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.551366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.558262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.558280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.567898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.567917] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.576509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.576527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.585308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.585326] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.593749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.593767] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.602008] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.602027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.609933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.609952] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.619137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.619156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.628153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.628172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.636425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.636444] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.644825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.644842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.653944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.653966] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.663180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.663203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.671805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.671823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.680909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.680928] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.206 [2024-05-15 15:53:54.689867] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.206 [2024-05-15 15:53:54.689885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.207 [2024-05-15 15:53:54.700999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.207 [2024-05-15 15:53:54.701017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.207 [2024-05-15 15:53:54.709943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.207 [2024-05-15 15:53:54.709962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.207 [2024-05-15 15:53:54.719971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.207 [2024-05-15 15:53:54.719990] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.207 [2024-05-15 15:53:54.726738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.207 [2024-05-15 15:53:54.726756] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.207 [2024-05-15 15:53:54.737425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.207 [2024-05-15 15:53:54.737444] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.207 [2024-05-15 15:53:54.743975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.207 [2024-05-15 15:53:54.743993] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.207 [2024-05-15 15:53:54.754642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.207 [2024-05-15 15:53:54.754661] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.207 [2024-05-15 15:53:54.762944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.207 [2024-05-15 15:53:54.762962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.771905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.771924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.780822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.780841] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.792152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.792170] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.800992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.801010] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.808629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.808647] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.818366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.818385] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.826847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.826870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.833708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.833726] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.844218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.844237] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.852538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.852557] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.860910] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.860929] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.869741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.869759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.878414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.878433] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.886567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.886585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.894785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.894803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.903549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.903566] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.911633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.911652] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.920313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.920331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.928857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.928880] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.937765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.937783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.945868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.945889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.954357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.954376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.963295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.963314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.970213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.970232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.467 [2024-05-15 15:53:54.981059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.467 [2024-05-15 15:53:54.981078] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:56.467 [2024-05-15 15:53:54.989836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:56.467 [2024-05-15 15:53:54.989858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats back-to-back with monotonically increasing timestamps (console clock 00:15:56.467 through 00:15:59.333, target clock 15:53:54.998 through 15:53:57.649); duplicate entries elided ...]
00:15:59.333 [2024-05-15 15:53:57.657846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.333 [2024-05-15 15:53:57.657865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.333 [2024-05-15 15:53:57.667030]
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.073 [2024-05-15 15:53:57.545588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.073 [2024-05-15 15:53:57.545607] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.073 [2024-05-15 15:53:57.554060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.554079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.562812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.562830] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.571525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.571544] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.580379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.580398] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.588948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.588966] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.597287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.597306] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.605398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.605416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.614657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.614675] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.623292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.623310] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.074 [2024-05-15 15:53:57.632032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.074 [2024-05-15 15:53:57.632051] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.333 [2024-05-15 15:53:57.640411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.333 [2024-05-15 15:53:57.640430] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.333 [2024-05-15 15:53:57.649430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.333 [2024-05-15 15:53:57.649452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.333 [2024-05-15 15:53:57.657846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.333 [2024-05-15 15:53:57.657865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.333 [2024-05-15 15:53:57.667030] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.333 [2024-05-15 15:53:57.667050] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.333 [2024-05-15 15:53:57.675639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.333 [2024-05-15 15:53:57.675658] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.689739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.689758] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.698111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.698129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.706370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.706389] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.714571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.714590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.722862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.722881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.731974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.731992] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.741041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.741059] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.749566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.749585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.758177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.758203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.766261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.766279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.774931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.774949] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.784116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.784135] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.792462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.792480] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.800564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.800582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.809533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.809552] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.818026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.818048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.826973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.826991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.835845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.835863] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.844135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.844153] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.852293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.852311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.860776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.860795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.869635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.869653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.878046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.878065] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.886593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.886612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.334 [2024-05-15 15:53:57.895068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.334 [2024-05-15 15:53:57.895087] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.903827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.903846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.912920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.912939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.922012] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.922030] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.930292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.930311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.938678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.938698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.947088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.947107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.955490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.955509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.963884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.963902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.972595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.972613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.981132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.981151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.989229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.989247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:57.997834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:57.997853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:58.007412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:58.007431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:58.018301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:58.018321] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:58.026795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:58.026814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.594 [2024-05-15 15:53:58.035818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.594 [2024-05-15 15:53:58.035836] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.044324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.044343] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.052600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.052618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.061714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.061732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.069897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.069916] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.080549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.080568] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.090686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.090705] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.099701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.099720] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.108028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.108047] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.115448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.115466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.125141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.125160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.133852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.133871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.142376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.142395] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.595 [2024-05-15 15:53:58.150680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.595 [2024-05-15 15:53:58.150698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.161372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.161390] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.170768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.170786] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.179360] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.179378] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.189659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.189677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.199479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.199497] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.210347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.210365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.216886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.216905] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.229809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.229828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.239320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.239349] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.247109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.247128] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.255062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.255081] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.264248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.264266] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.272693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.272711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.281758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.281776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.290746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.290764] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.301089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.301106] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.309713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.309730] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.320571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.320589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.327651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.327669] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.337942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.337961] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.346841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.346860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.355396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.355415] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.361978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.361996] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.371845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.371865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.380612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.380630] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.389641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.389660] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.398453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.398471] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.406819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.406838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.855 [2024-05-15 15:53:58.415066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.855 [2024-05-15 15:53:58.415084] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.421791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.421810] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.432450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.432468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.441330] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.441359] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.450000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.450018] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.458355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.458375] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.467088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.467108] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.475520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.475539] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.484483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.484507] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.492798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.492817] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.501080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.501099] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.510400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.510419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.519063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.519084] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.528071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.528090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.536944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.536975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.115 [2024-05-15 15:53:58.546357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.115 [2024-05-15 15:53:58.546376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.555375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.555397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.564212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.564231] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.573149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.573168] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.581768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.581787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.590688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.590707] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.599231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.599250] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.607230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.607249] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.615947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.615966] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.624500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.624519] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.633040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.633059] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.642118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.642137] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.650424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.650447] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.659568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.659588] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.668036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.668055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.116 [2024-05-15 15:53:58.676170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.116 [2024-05-15 15:53:58.676189] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.684262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.684283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.693105] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.693125] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.701698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.701718] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.710521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.710541] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.718904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.718923] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.727543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.727563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.736034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.736053] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.744325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.744354] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.752852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.752872] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.761080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.761099] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.770489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.770508] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.779550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.779569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.788322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.788341] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.796464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.796482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.805360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.805379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.813979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.814002] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.822363] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.822382] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.831215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.831234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.839641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.839659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.847688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.847706] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.856546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.856566] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.865558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.865578] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.873814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.873833] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.882823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.882842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.891283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.891301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.900490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.900509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.908696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.908715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.917467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.917486] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.925784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.925803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.375 [2024-05-15 15:53:58.933741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.375 [2024-05-15 15:53:58.933760] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:58.942633] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:58.942651] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:58.951484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:58.951503] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:58.960312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:58.960331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:58.969443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:58.969462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:58.977968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:58.977993] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:58.986295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:58.986314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:58.994808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:58.994827] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.003852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.003871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.012947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.012967] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.022188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.022215] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.030658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.030679] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.039370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.039390] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.047963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.047982] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.056409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.056428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.064767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.064786] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.073263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.073281] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.080971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.080990] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 00:16:00.636 Latency(us) 00:16:00.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.636 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:00.636 Nvme1n1 : 5.01 16675.41 130.28 0.00 0.00 7668.02 2385.51 28730.98 00:16:00.636 =================================================================================================================== 00:16:00.636 Total : 16675.41 130.28 0.00 0.00 7668.02 2385.51 28730.98 00:16:00.636 [2024-05-15 15:53:59.087660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.087677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.095680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.095695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.103698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.103708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.111726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.111742] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.119750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.119770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.127764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.127777] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.135785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.135798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.143806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.143820] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.151828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.151842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.159849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.159862] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.167871] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.167885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.175892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.175906] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.183913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.183927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.636 [2024-05-15 15:53:59.191931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.636 [2024-05-15 15:53:59.191942] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.896 [2024-05-15 15:53:59.199953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.896 [2024-05-15 15:53:59.199963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.207974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.207983] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.215996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.216008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.224016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.224029] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.232037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.232048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.240058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.240068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.248079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.248089] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.256100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.256111] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.264123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.264135] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.272143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.272153] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.897 [2024-05-15 15:53:59.280164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.897 [2024-05-15 15:53:59.280174] 
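Note: the long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs condensed above is expected behavior, not a malfunction: while the zcopy I/O job still holds NSID 1 on nqn.2016-06.io.spdk:cnode1, the test keeps re-issuing the add-namespace RPC and each attempt is correctly rejected. A minimal sketch of a loop that would reproduce this pattern, assuming SPDK's stock scripts/rpc.py helper; the bdev name Nvme1n1, the io_pid variable, and the loop structure are illustrative assumptions, not the literal zcopy.sh source:

    # keep issuing add_ns requests while the I/O process is alive; every call
    # should fail with "Requested NSID 1 already in use", producing exactly
    # the error pair repeated in the log above
    while kill -0 "$io_pid" 2>/dev/null; do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme1n1 -n 1 || true
    done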
00:16:00.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3730024) - No such process
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3730024
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:00.897 delay0
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:00.897 15:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:16:00.897 EAL: No free 2048 kB hugepages reported on node 1
00:16:07.464 [2024-05-15 15:53:59.373655] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:16:07.464 Initializing NVMe Controllers
00:16:07.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:07.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:07.464 Initialization complete. Launching workers.
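Note: zcopy.sh@52-@56 above set up the abort phase: the namespace is detached, malloc0 is wrapped in a delay bdev (delay0) whose -r/-t/-w/-n arguments set average and p99 read/write latencies in microseconds (so every I/O is held for roughly one second), and delay0 is re-exported as NSID 1 before the abort example runs with one core (-c 0x1), a 5 s runtime (-t 5), queue depth 64 (-q 64) and a 50/50 randrw mix (-w randrw -M 50). The injected latency guarantees that commands are still queued when the abort requests arrive. A hedged sketch of the same sequence against a local target using stock rpc.py; the bdev_malloc_create step is an assumption, since this log's malloc0 already existed:

    # create a 64 MiB / 512 B-block ramdisk, wrap it with ~1 s of injected
    # latency on both reads and writes, and publish the slow bdev as NSID 1
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1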
00:16:07.464 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 94
00:16:07.464 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 371, failed to submit 43
00:16:07.464 success 196, unsuccess 175, failed 0
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:07.464 rmmod nvme_tcp
00:16:07.464 rmmod nvme_fabrics
00:16:07.464 rmmod nvme_keyring
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3728135 ']'
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3728135
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3728135 ']'
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3728135
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3728135
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3728135'
00:16:07.464 killing process with pid 3728135
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3728135
00:16:07.464 [2024-05-15 15:54:05.627203] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3728135
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:07.464 15:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
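Note: the abort statistics at the top of this block are self-consistent: 320 completed + 94 failed = 414 I/Os issued, matching 371 aborts submitted + 43 that could not be submitted = 414 abort attempts (one per outstanding command), and of the submitted aborts 196 succeeded + 175 were unsuccessful = 371, with 0 abort commands failing outright.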
15:54:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:09.372 00:16:09.372 real 0m32.864s 00:16:09.372 user 0m41.323s 00:16:09.372 sys 0m13.259s 00:16:09.372 15:54:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:09.372 15:54:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.372 ************************************ 00:16:09.372 END TEST nvmf_zcopy 00:16:09.372 ************************************ 00:16:09.632 15:54:07 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:09.632 15:54:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:09.632 15:54:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:09.632 15:54:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.632 ************************************ 00:16:09.632 START TEST nvmf_nmic 00:16:09.632 ************************************ 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:09.632 * Looking for test storage... 00:16:09.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.632 15:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.633 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:09.633 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:09.633 15:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:09.633 15:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:16.281 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:16.281 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:16.281 Found net devices under 0000:af:00.0: cvl_0_0 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.281 15:54:14 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:16.281 Found net devices under 0000:af:00.1: cvl_0_1 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:16.281 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:16.282 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.282 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.282 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.282 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.282 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:16.282 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:16.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:16.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:16:16.541 00:16:16.541 --- 10.0.0.2 ping statistics --- 00:16:16.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.541 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:16:16.541 00:16:16.541 --- 10.0.0.1 ping statistics --- 00:16:16.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.541 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:16.541 15:54:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:16.541 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:16.541 15:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:16.541 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:16.541 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:16.541 15:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3736384 00:16:16.542 15:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.542 15:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3736384 00:16:16.542 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3736384 ']' 00:16:16.542 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.542 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:16.542 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.542 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:16.542 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:16.542 [2024-05-15 15:54:15.066463] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
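
The two pings above are the checkpoint for the netns topology that nvmf_tcp_init builds on phy nodes: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, so host and target traffic crosses the physical E810 link instead of loopback. Condensed from the commands logged above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The nvmf_tgt started next runs under "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD is prepended to NVMF_APP), which is why its 10.0.0.2:4420 listener is reached from the host side via cvl_0_1.
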
00:16:16.542 [2024-05-15 15:54:15.066508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.542 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.801 [2024-05-15 15:54:15.141303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.801 [2024-05-15 15:54:15.212157] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.801 [2024-05-15 15:54:15.212208] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.801 [2024-05-15 15:54:15.212218] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.801 [2024-05-15 15:54:15.212229] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.801 [2024-05-15 15:54:15.212237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.801 [2024-05-15 15:54:15.212291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.801 [2024-05-15 15:54:15.212405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.801 [2024-05-15 15:54:15.212489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.801 [2024-05-15 15:54:15.212491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.370 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.370 [2024-05-15 15:54:15.927983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 Malloc0 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 [2024-05-15 15:54:15.982311] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:17.631 [2024-05-15 15:54:15.982582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:17.631 test case1: single bdev can't be used in multiple subsystems 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.631 15:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 [2024-05-15 15:54:16.006415] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:17.631 [2024-05-15 15:54:16.006435] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:17.631 [2024-05-15 15:54:16.006444] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.631 request: 00:16:17.631 { 00:16:17.631 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:17.631 "namespace": { 00:16:17.631 "bdev_name": "Malloc0", 00:16:17.631 "no_auto_visible": false 00:16:17.631 }, 00:16:17.631 "method": "nvmf_subsystem_add_ns", 00:16:17.631 "req_id": 1 00:16:17.631 } 00:16:17.631 Got JSON-RPC error response 00:16:17.631 response: 00:16:17.631 { 00:16:17.631 "code": -32602, 00:16:17.631 "message": "Invalid parameters" 00:16:17.631 } 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:17.631 15:54:16 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:17.631 Adding namespace failed - expected result. 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:17.631 test case2: host connect to nvmf target in multiple paths 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 [2024-05-15 15:54:16.022583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.631 15:54:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.011 15:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:20.391 15:54:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.391 15:54:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:16:20.391 15:54:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.391 15:54:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:20.391 15:54:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:16:22.298 15:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:22.298 15:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:22.298 15:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.298 15:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:22.298 15:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.298 15:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:16:22.298 15:54:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:22.298 [global] 00:16:22.298 thread=1 00:16:22.298 invalidate=1 00:16:22.298 rw=write 00:16:22.298 time_based=1 00:16:22.298 runtime=1 00:16:22.298 ioengine=libaio 00:16:22.298 direct=1 00:16:22.298 bs=4096 00:16:22.298 iodepth=1 00:16:22.298 norandommap=0 00:16:22.298 numjobs=1 00:16:22.298 00:16:22.298 verify_dump=1 00:16:22.298 verify_backlog=512 00:16:22.298 verify_state_save=0 00:16:22.298 do_verify=1 00:16:22.298 verify=crc32c-intel 00:16:22.298 [job0] 00:16:22.298 filename=/dev/nvme0n1 00:16:22.298 Could not set queue depth (nvme0n1) 00:16:22.558 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:22.558 fio-3.35 00:16:22.558 Starting 1 thread 00:16:23.937 00:16:23.937 job0: (groupid=0, jobs=1): err= 0: pid=3737625: Wed May 15 15:54:22 2024 00:16:23.937 read: IOPS=19, BW=79.2KiB/s (81.1kB/s)(80.0KiB/1010msec) 00:16:23.937 slat (nsec): min=11701, max=30907, avg=24876.55, stdev=3410.82 00:16:23.937 clat (usec): min=41082, max=43887, avg=42000.15, stdev=492.21 00:16:23.937 lat (usec): min=41108, max=43918, avg=42025.02, stdev=493.75 00:16:23.937 clat percentiles (usec): 00:16:23.937 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:23.937 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:23.937 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:23.937 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:16:23.937 | 99.99th=[43779] 00:16:23.937 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:16:23.937 slat (usec): min=12, max=28138, avg=68.49, stdev=1242.98 00:16:23.937 clat (usec): min=197, max=734, avg=259.37, stdev=80.32 00:16:23.937 lat (usec): min=219, max=28873, avg=327.86, stdev=1266.44 00:16:23.937 clat percentiles (usec): 00:16:23.937 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:16:23.937 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:16:23.937 | 70.00th=[ 239], 80.00th=[ 289], 90.00th=[ 355], 95.00th=[ 453], 00:16:23.937 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 734], 99.95th=[ 734], 00:16:23.937 | 99.99th=[ 734] 00:16:23.937 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:23.937 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:23.937 lat (usec) : 250=69.55%, 500=22.93%, 750=3.76% 00:16:23.937 lat (msec) : 50=3.76% 00:16:23.937 cpu : usr=0.59%, sys=0.89%, ctx=535, majf=0, minf=2 00:16:23.937 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.937 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.937 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.937 00:16:23.937 Run status group 0 (all jobs): 00:16:23.937 READ: bw=79.2KiB/s (81.1kB/s), 79.2KiB/s-79.2KiB/s (81.1kB/s-81.1kB/s), io=80.0KiB (81.9kB), run=1010-1010msec 00:16:23.937 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:16:23.937 00:16:23.937 Disk stats (read/write): 00:16:23.937 nvme0n1: ios=42/512, merge=0/0, ticks=1684/131, in_queue=1815, util=99.00% 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:23.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.937 
15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.937 rmmod nvme_tcp 00:16:23.937 rmmod nvme_fabrics 00:16:23.937 rmmod nvme_keyring 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3736384 ']' 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3736384 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3736384 ']' 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3736384 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:23.937 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3736384 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3736384' 00:16:24.197 killing process with pid 3736384 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3736384 00:16:24.197 [2024-05-15 15:54:22.520997] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3736384 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.197 15:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.739 15:54:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.739 00:16:26.739 real 0m16.805s 00:16:26.739 user 0m39.212s 00:16:26.739 sys 0m6.304s 00:16:26.739 15:54:24 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:16:26.739 15:54:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.739 ************************************ 00:16:26.739 END TEST nvmf_nmic 00:16:26.739 ************************************ 00:16:26.739 15:54:24 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:26.739 15:54:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:26.739 15:54:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.739 15:54:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.739 ************************************ 00:16:26.739 START TEST nvmf_fio_target 00:16:26.739 ************************************ 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:26.739 * Looking for test storage... 00:16:26.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.739 15:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.739 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:26.739 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:26.739 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.739 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.739 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
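
Recap of the nmic run that just finished: test case 1 exercises bdev claim semantics — a bdev exported through one subsystem is opened with an exclusive_write claim, so adding the same Malloc0 to cnode2 has to fail, and the harness counts the -32602 "Invalid parameters" JSON-RPC error as the expected result. Test case 2 adds a second listener on 4421 and connects the kernel initiator over both ports; the 4k fio write job and the final "disconnected 2 controller(s)" confirm both paths were usable. The deliberately failing step, condensed from the log (rpc.py standing in for rpc_cmd):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # expected error: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
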
00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.740 15:54:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
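    # Context for the array-building above and below: gather_supported_nvmf_pci_devs
    # collects PCI vendor:device IDs per NIC family — e810 (0x8086:0x1592, 0x8086:0x159b),
    # x722 (0x8086:0x37d2), and a set of Mellanox ConnectX IDs in mlx. Because this job
    # runs with SPDK_TEST_NVMF_NICS=e810, the later pci_devs=("${e810[@]}") keeps only
    # the E810 entries, which is how the two 0x159b ports at 0000:af:00.0/.1 get picked.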
00:16:33.310 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:33.311 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:33.311 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.311 
15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:33.311 Found net devices under 0000:af:00.0: cvl_0_0 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:33.311 Found net devices under 0000:af:00.1: cvl_0_1 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.311 15:54:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:16:33.311 00:16:33.311 --- 10.0.0.2 ping statistics --- 00:16:33.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.311 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:16:33.311 00:16:33.311 --- 10.0.0.1 ping statistics --- 00:16:33.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.311 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3741328 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3741328 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3741328 ']' 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
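
nvmfappstart here launches nvmf_tgt inside the namespace with -m 0xF (four reactors, matching the "Total cores available: 4" startup notice) and then blocks in waitforlisten until the app's JSON-RPC socket answers. A minimal sketch of that wait loop — hypothetical shape for illustration; the real helper in autotest_common.sh is more elaborate:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # socket exists and the app answers a trivial RPC -> ready
            [[ -S $rpc_addr ]] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                     # timed out (~10 s)
    }
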
00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:33.311 15:54:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.311 [2024-05-15 15:54:31.370538] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:16:33.311 [2024-05-15 15:54:31.370587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.311 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.311 [2024-05-15 15:54:31.446576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.311 [2024-05-15 15:54:31.522116] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.311 [2024-05-15 15:54:31.522152] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.311 [2024-05-15 15:54:31.522161] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.312 [2024-05-15 15:54:31.522170] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.312 [2024-05-15 15:54:31.522177] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.312 [2024-05-15 15:54:31.522229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.312 [2024-05-15 15:54:31.522357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.312 [2024-05-15 15:54:31.522442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.312 [2024-05-15 15:54:31.522444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.880 15:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:33.880 15:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:16:33.880 15:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.880 15:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.880 15:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.880 15:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.880 15:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:33.880 [2024-05-15 15:54:32.371546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.880 15:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.139 15:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:34.139 15:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.398 15:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:34.398 15:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.656 15:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:34.656 15:54:32 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.656 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:34.656 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:34.914 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.172 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:35.172 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.429 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:35.429 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.429 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:35.429 15:54:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:35.687 15:54:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:35.946 15:54:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:35.946 15:54:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.946 15:54:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:35.946 15:54:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.205 15:54:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.489 [2024-05-15 15:54:34.839700] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:36.489 [2024-05-15 15:54:34.839969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.489 15:54:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:36.747 15:54:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:36.747 15:54:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.125 15:54:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:16:38.125 15:54:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:16:38.125 15:54:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.125 15:54:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:16:38.125 15:54:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:16:38.125 15:54:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:16:40.659 15:54:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:40.659 15:54:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:40.660 15:54:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.660 15:54:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:16:40.660 15:54:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.660 15:54:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:16:40.660 15:54:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:40.660 [global] 00:16:40.660 thread=1 00:16:40.660 invalidate=1 00:16:40.660 rw=write 00:16:40.660 time_based=1 00:16:40.660 runtime=1 00:16:40.660 ioengine=libaio 00:16:40.660 direct=1 00:16:40.660 bs=4096 00:16:40.660 iodepth=1 00:16:40.660 norandommap=0 00:16:40.660 numjobs=1 00:16:40.660 00:16:40.660 verify_dump=1 00:16:40.660 verify_backlog=512 00:16:40.660 verify_state_save=0 00:16:40.660 do_verify=1 00:16:40.660 verify=crc32c-intel 00:16:40.660 [job0] 00:16:40.660 filename=/dev/nvme0n1 00:16:40.660 [job1] 00:16:40.660 filename=/dev/nvme0n2 00:16:40.660 [job2] 00:16:40.660 filename=/dev/nvme0n3 00:16:40.660 [job3] 00:16:40.660 filename=/dev/nvme0n4 00:16:40.660 Could not set queue depth (nvme0n1) 00:16:40.660 Could not set queue depth (nvme0n2) 00:16:40.660 Could not set queue depth (nvme0n3) 00:16:40.660 Could not set queue depth (nvme0n4) 00:16:40.660 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.660 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.660 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.660 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.660 fio-3.35 00:16:40.660 Starting 4 threads 00:16:42.051 00:16:42.051 job0: (groupid=0, jobs=1): err= 0: pid=3742876: Wed May 15 15:54:40 2024 00:16:42.051 read: IOPS=1079, BW=4320KiB/s (4423kB/s)(4324KiB/1001msec) 00:16:42.051 slat (nsec): min=9203, max=40462, avg=10224.43, stdev=1429.14 00:16:42.051 clat (usec): min=412, max=1126, avg=465.03, stdev=29.88 00:16:42.051 lat (usec): min=422, max=1138, avg=475.25, stdev=29.96 00:16:42.051 clat percentiles (usec): 00:16:42.051 | 1.00th=[ 429], 5.00th=[ 437], 10.00th=[ 441], 20.00th=[ 449], 00:16:42.051 | 30.00th=[ 453], 40.00th=[ 457], 50.00th=[ 461], 60.00th=[ 465], 00:16:42.051 | 70.00th=[ 474], 80.00th=[ 478], 90.00th=[ 490], 95.00th=[ 498], 00:16:42.051 | 99.00th=[ 529], 99.50th=[ 594], 99.90th=[ 693], 99.95th=[ 1123], 00:16:42.051 | 99.99th=[ 1123] 
00:16:42.051 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:42.051 slat (usec): min=13, max=41604, avg=68.41, stdev=1498.32 00:16:42.051 clat (usec): min=205, max=369, avg=239.89, stdev=14.72 00:16:42.051 lat (usec): min=218, max=41967, avg=308.29, stdev=1502.95 00:16:42.051 clat percentiles (usec): 00:16:42.051 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 231], 00:16:42.051 | 30.00th=[ 233], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 241], 00:16:42.051 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:16:42.051 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 363], 99.95th=[ 371], 00:16:42.051 | 99.99th=[ 371] 00:16:42.051 bw ( KiB/s): min= 5192, max= 5192, per=32.84%, avg=5192.00, stdev= 0.00, samples=1 00:16:42.051 iops : min= 1298, max= 1298, avg=1298.00, stdev= 0.00, samples=1 00:16:42.051 lat (usec) : 250=49.60%, 500=48.57%, 750=1.80% 00:16:42.051 lat (msec) : 2=0.04% 00:16:42.051 cpu : usr=3.40%, sys=4.20%, ctx=2620, majf=0, minf=1 00:16:42.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.051 issued rwts: total=1081,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.051 job1: (groupid=0, jobs=1): err= 0: pid=3742877: Wed May 15 15:54:40 2024 00:16:42.051 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:42.051 slat (nsec): min=8797, max=22217, avg=9514.92, stdev=907.65 00:16:42.051 clat (usec): min=328, max=2305, avg=519.73, stdev=78.27 00:16:42.051 lat (usec): min=337, max=2314, avg=529.25, stdev=78.32 00:16:42.051 clat percentiles (usec): 00:16:42.051 | 1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 429], 20.00th=[ 494], 00:16:42.052 | 30.00th=[ 510], 40.00th=[ 523], 50.00th=[ 537], 60.00th=[ 537], 00:16:42.052 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 578], 00:16:42.052 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 734], 99.95th=[ 2311], 00:16:42.052 | 99.99th=[ 2311] 00:16:42.052 write: IOPS=1517, BW=6070KiB/s (6216kB/s)(6076KiB/1001msec); 0 zone resets 00:16:42.052 slat (nsec): min=12082, max=46990, avg=13959.46, stdev=2203.54 00:16:42.052 clat (usec): min=189, max=708, avg=282.69, stdev=92.78 00:16:42.052 lat (usec): min=202, max=721, avg=296.64, stdev=93.34 00:16:42.052 clat percentiles (usec): 00:16:42.052 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:16:42.052 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 265], 00:16:42.052 | 70.00th=[ 302], 80.00th=[ 355], 90.00th=[ 424], 95.00th=[ 519], 00:16:42.052 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 701], 99.95th=[ 709], 00:16:42.052 | 99.99th=[ 709] 00:16:42.052 bw ( KiB/s): min= 4848, max= 4848, per=30.66%, avg=4848.00, stdev= 0.00, samples=1 00:16:42.052 iops : min= 1212, max= 1212, avg=1212.00, stdev= 0.00, samples=1 00:16:42.052 lat (usec) : 250=33.50%, 500=32.56%, 750=33.90% 00:16:42.052 lat (msec) : 4=0.04% 00:16:42.052 cpu : usr=3.20%, sys=3.80%, ctx=2544, majf=0, minf=1 00:16:42.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.052 issued rwts: total=1024,1519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.052 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:16:42.052 job2: (groupid=0, jobs=1): err= 0: pid=3742878: Wed May 15 15:54:40 2024 00:16:42.052 read: IOPS=323, BW=1295KiB/s (1326kB/s)(1336KiB/1032msec) 00:16:42.052 slat (nsec): min=8309, max=26894, avg=9894.53, stdev=3589.87 00:16:42.052 clat (usec): min=461, max=43015, avg=2652.40, stdev=9110.99 00:16:42.052 lat (usec): min=469, max=43041, avg=2662.30, stdev=9114.15 00:16:42.052 clat percentiles (usec): 00:16:42.052 | 1.00th=[ 465], 5.00th=[ 474], 10.00th=[ 490], 20.00th=[ 506], 00:16:42.052 | 30.00th=[ 515], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 537], 00:16:42.052 | 70.00th=[ 545], 80.00th=[ 603], 90.00th=[ 668], 95.00th=[41157], 00:16:42.052 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:16:42.052 | 99.99th=[43254] 00:16:42.052 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:16:42.052 slat (nsec): min=11432, max=73335, avg=12685.44, stdev=2960.03 00:16:42.052 clat (usec): min=198, max=646, avg=261.99, stdev=77.59 00:16:42.052 lat (usec): min=211, max=719, avg=274.67, stdev=78.28 00:16:42.052 clat percentiles (usec): 00:16:42.052 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:16:42.052 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:16:42.052 | 70.00th=[ 253], 80.00th=[ 281], 90.00th=[ 363], 95.00th=[ 469], 00:16:42.052 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 644], 99.95th=[ 644], 00:16:42.052 | 99.99th=[ 644] 00:16:42.052 bw ( KiB/s): min= 4096, max= 4096, per=25.91%, avg=4096.00, stdev= 0.00, samples=1 00:16:42.052 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:42.052 lat (usec) : 250=41.13%, 500=22.93%, 750=33.57% 00:16:42.052 lat (msec) : 2=0.35%, 50=2.01% 00:16:42.052 cpu : usr=0.58%, sys=0.87%, ctx=846, majf=0, minf=2 00:16:42.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.052 issued rwts: total=334,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.052 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.052 job3: (groupid=0, jobs=1): err= 0: pid=3742879: Wed May 15 15:54:40 2024 00:16:42.052 read: IOPS=22, BW=90.4KiB/s (92.5kB/s)(92.0KiB/1018msec) 00:16:42.052 slat (nsec): min=6699, max=25289, avg=14096.09, stdev=5194.59 00:16:42.052 clat (usec): min=784, max=41998, avg=36925.10, stdev=12419.56 00:16:42.052 lat (usec): min=794, max=42013, avg=36939.20, stdev=12420.27 00:16:42.052 clat percentiles (usec): 00:16:42.052 | 1.00th=[ 783], 5.00th=[ 1057], 10.00th=[17433], 20.00th=[41157], 00:16:42.052 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:16:42.052 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:42.052 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:42.052 | 99.99th=[42206] 00:16:42.052 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:16:42.052 slat (nsec): min=7605, max=47808, avg=12605.57, stdev=2647.44 00:16:42.052 clat (usec): min=196, max=720, avg=304.85, stdev=125.32 00:16:42.052 lat (usec): min=208, max=754, avg=317.45, stdev=126.00 00:16:42.052 clat percentiles (usec): 00:16:42.052 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:16:42.052 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 241], 00:16:42.052 | 70.00th=[ 351], 80.00th=[ 416], 
90.00th=[ 529], 95.00th=[ 537], 00:16:42.052 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 717], 99.95th=[ 717], 00:16:42.052 | 99.99th=[ 717] 00:16:42.052 bw ( KiB/s): min= 4096, max= 4096, per=25.91%, avg=4096.00, stdev= 0.00, samples=1 00:16:42.052 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:42.052 lat (usec) : 250=60.93%, 500=18.32%, 750=16.45%, 1000=0.19% 00:16:42.052 lat (msec) : 2=0.19%, 20=0.19%, 50=3.74% 00:16:42.052 cpu : usr=0.20%, sys=0.79%, ctx=537, majf=0, minf=1 00:16:42.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.052 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.052 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.052 00:16:42.052 Run status group 0 (all jobs): 00:16:42.052 READ: bw=9543KiB/s (9772kB/s), 90.4KiB/s-4320KiB/s (92.5kB/s-4423kB/s), io=9848KiB (10.1MB), run=1001-1032msec 00:16:42.052 WRITE: bw=15.4MiB/s (16.2MB/s), 1984KiB/s-6138KiB/s (2032kB/s-6285kB/s), io=15.9MiB (16.7MB), run=1001-1032msec 00:16:42.052 00:16:42.052 Disk stats (read/write): 00:16:42.052 nvme0n1: ios=1009/1024, merge=0/0, ticks=1291/242, in_queue=1533, util=86.67% 00:16:42.052 nvme0n2: ios=958/1024, merge=0/0, ticks=1354/298, in_queue=1652, util=86.87% 00:16:42.052 nvme0n3: ios=386/512, merge=0/0, ticks=768/132, in_queue=900, util=93.06% 00:16:42.052 nvme0n4: ios=75/512, merge=0/0, ticks=779/146, in_queue=925, util=98.05% 00:16:42.052 15:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:42.052 [global] 00:16:42.052 thread=1 00:16:42.052 invalidate=1 00:16:42.052 rw=randwrite 00:16:42.052 time_based=1 00:16:42.052 runtime=1 00:16:42.052 ioengine=libaio 00:16:42.052 direct=1 00:16:42.052 bs=4096 00:16:42.052 iodepth=1 00:16:42.052 norandommap=0 00:16:42.052 numjobs=1 00:16:42.052 00:16:42.052 verify_dump=1 00:16:42.052 verify_backlog=512 00:16:42.052 verify_state_save=0 00:16:42.052 do_verify=1 00:16:42.052 verify=crc32c-intel 00:16:42.052 [job0] 00:16:42.052 filename=/dev/nvme0n1 00:16:42.052 [job1] 00:16:42.052 filename=/dev/nvme0n2 00:16:42.052 [job2] 00:16:42.052 filename=/dev/nvme0n3 00:16:42.052 [job3] 00:16:42.052 filename=/dev/nvme0n4 00:16:42.052 Could not set queue depth (nvme0n1) 00:16:42.052 Could not set queue depth (nvme0n2) 00:16:42.052 Could not set queue depth (nvme0n3) 00:16:42.052 Could not set queue depth (nvme0n4) 00:16:42.318 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.318 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.318 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.318 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.318 fio-3.35 00:16:42.318 Starting 4 threads 00:16:43.769 00:16:43.769 job0: (groupid=0, jobs=1): err= 0: pid=3743296: Wed May 15 15:54:41 2024 00:16:43.769 read: IOPS=68, BW=274KiB/s (281kB/s)(280KiB/1021msec) 00:16:43.769 slat (nsec): min=8806, max=28641, avg=12348.39, stdev=5441.86 00:16:43.769 clat (usec): min=435, max=42110, avg=12342.14, stdev=18855.55 00:16:43.769 lat (usec): 
min=444, max=42120, avg=12354.48, stdev=18859.94 00:16:43.769 clat percentiles (usec): 00:16:43.769 | 1.00th=[ 437], 5.00th=[ 445], 10.00th=[ 449], 20.00th=[ 461], 00:16:43.769 | 30.00th=[ 469], 40.00th=[ 478], 50.00th=[ 490], 60.00th=[ 510], 00:16:43.769 | 70.00th=[ 963], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:16:43.769 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:43.769 | 99.99th=[42206] 00:16:43.769 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:16:43.770 slat (nsec): min=11789, max=72428, avg=13444.88, stdev=3387.12 00:16:43.770 clat (usec): min=217, max=781, avg=288.49, stdev=90.71 00:16:43.770 lat (usec): min=228, max=853, avg=301.94, stdev=91.51 00:16:43.770 clat percentiles (usec): 00:16:43.770 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 237], 00:16:43.770 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 262], 00:16:43.770 | 70.00th=[ 277], 80.00th=[ 314], 90.00th=[ 416], 95.00th=[ 474], 00:16:43.770 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 783], 99.95th=[ 783], 00:16:43.770 | 99.99th=[ 783] 00:16:43.770 bw ( KiB/s): min= 4096, max= 4096, per=32.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:43.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:43.770 lat (usec) : 250=42.44%, 500=48.28%, 750=5.15%, 1000=0.52% 00:16:43.770 lat (msec) : 2=0.17%, 50=3.44% 00:16:43.770 cpu : usr=0.78%, sys=0.78%, ctx=583, majf=0, minf=2 00:16:43.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.770 issued rwts: total=70,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.770 job1: (groupid=0, jobs=1): err= 0: pid=3743297: Wed May 15 15:54:41 2024 00:16:43.770 read: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec) 00:16:43.770 slat (nsec): min=8693, max=45434, avg=9635.57, stdev=2160.91 00:16:43.770 clat (usec): min=308, max=1272, avg=505.48, stdev=109.18 00:16:43.770 lat (usec): min=317, max=1298, avg=515.11, stdev=109.43 00:16:43.770 clat percentiles (usec): 00:16:43.770 | 1.00th=[ 334], 5.00th=[ 367], 10.00th=[ 396], 20.00th=[ 429], 00:16:43.770 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 498], 60.00th=[ 510], 00:16:43.770 | 70.00th=[ 529], 80.00th=[ 562], 90.00th=[ 627], 95.00th=[ 685], 00:16:43.770 | 99.00th=[ 857], 99.50th=[ 1123], 99.90th=[ 1270], 99.95th=[ 1270], 00:16:43.770 | 99.99th=[ 1270] 00:16:43.770 write: IOPS=1219, BW=4879KiB/s (4996kB/s)(4908KiB/1006msec); 0 zone resets 00:16:43.770 slat (usec): min=10, max=25724, avg=34.00, stdev=734.02 00:16:43.770 clat (usec): min=190, max=995, avg=350.59, stdev=110.48 00:16:43.770 lat (usec): min=204, max=26402, avg=384.59, stdev=751.56 00:16:43.770 clat percentiles (usec): 00:16:43.770 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 247], 00:16:43.770 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 326], 60.00th=[ 347], 00:16:43.770 | 70.00th=[ 412], 80.00th=[ 494], 90.00th=[ 510], 95.00th=[ 519], 00:16:43.770 | 99.00th=[ 627], 99.50th=[ 685], 99.90th=[ 742], 99.95th=[ 996], 00:16:43.770 | 99.99th=[ 996] 00:16:43.770 bw ( KiB/s): min= 4112, max= 5696, per=38.60%, avg=4904.00, stdev=1120.06, samples=2 00:16:43.770 iops : min= 1028, max= 1424, avg=1226.00, stdev=280.01, samples=2 00:16:43.770 lat (usec) : 250=11.86%, 500=58.46%, 750=28.74%, 
1000=0.49% 00:16:43.770 lat (msec) : 2=0.44% 00:16:43.770 cpu : usr=1.99%, sys=3.28%, ctx=2253, majf=0, minf=1 00:16:43.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.770 issued rwts: total=1024,1227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.770 job2: (groupid=0, jobs=1): err= 0: pid=3743298: Wed May 15 15:54:41 2024 00:16:43.770 read: IOPS=20, BW=81.5KiB/s (83.4kB/s)(84.0KiB/1031msec) 00:16:43.770 slat (nsec): min=11764, max=26660, avg=23810.90, stdev=4738.06 00:16:43.770 clat (usec): min=40937, max=42060, avg=41640.08, stdev=451.13 00:16:43.770 lat (usec): min=40962, max=42087, avg=41663.89, stdev=454.04 00:16:43.770 clat percentiles (usec): 00:16:43.770 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:43.770 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:16:43.770 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:43.770 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:43.770 | 99.99th=[42206] 00:16:43.770 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:16:43.770 slat (nsec): min=8652, max=45871, avg=12983.81, stdev=2783.68 00:16:43.770 clat (usec): min=198, max=683, avg=289.23, stdev=89.45 00:16:43.770 lat (usec): min=211, max=724, avg=302.21, stdev=90.30 00:16:43.770 clat percentiles (usec): 00:16:43.770 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:16:43.770 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 273], 00:16:43.770 | 70.00th=[ 289], 80.00th=[ 343], 90.00th=[ 416], 95.00th=[ 515], 00:16:43.770 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 685], 99.95th=[ 685], 00:16:43.770 | 99.99th=[ 685] 00:16:43.770 bw ( KiB/s): min= 4096, max= 4096, per=32.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:43.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:43.770 lat (usec) : 250=46.34%, 500=42.21%, 750=7.50% 00:16:43.770 lat (msec) : 50=3.94% 00:16:43.770 cpu : usr=0.49%, sys=0.49%, ctx=536, majf=0, minf=1 00:16:43.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.770 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.770 job3: (groupid=0, jobs=1): err= 0: pid=3743299: Wed May 15 15:54:41 2024 00:16:43.770 read: IOPS=509, BW=2040KiB/s (2089kB/s)(2060KiB/1010msec) 00:16:43.770 slat (nsec): min=8777, max=53703, avg=15737.70, stdev=7937.12 00:16:43.770 clat (usec): min=327, max=42036, avg=1286.25, stdev=5104.34 00:16:43.770 lat (usec): min=336, max=42062, avg=1301.99, stdev=5105.43 00:16:43.770 clat percentiles (usec): 00:16:43.770 | 1.00th=[ 347], 5.00th=[ 392], 10.00th=[ 449], 20.00th=[ 529], 00:16:43.770 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 652], 00:16:43.770 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 758], 95.00th=[ 857], 00:16:43.770 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:16:43.770 | 99.99th=[42206] 00:16:43.770 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 
zone resets 00:16:43.770 slat (nsec): min=11656, max=39565, avg=13069.20, stdev=1994.64 00:16:43.770 clat (usec): min=179, max=1084, avg=313.86, stdev=104.83 00:16:43.770 lat (usec): min=191, max=1097, avg=326.93, stdev=105.19 00:16:43.770 clat percentiles (usec): 00:16:43.770 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 215], 00:16:43.770 | 30.00th=[ 233], 40.00th=[ 260], 50.00th=[ 302], 60.00th=[ 351], 00:16:43.770 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 429], 95.00th=[ 529], 00:16:43.770 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 750], 99.95th=[ 1090], 00:16:43.770 | 99.99th=[ 1090] 00:16:43.770 bw ( KiB/s): min= 4096, max= 4096, per=32.24%, avg=4096.00, stdev= 0.00, samples=2 00:16:43.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:16:43.770 lat (usec) : 250=24.30%, 500=42.11%, 750=30.02%, 1000=2.92% 00:16:43.770 lat (msec) : 2=0.06%, 50=0.58% 00:16:43.770 cpu : usr=1.39%, sys=2.18%, ctx=1540, majf=0, minf=1 00:16:43.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.770 issued rwts: total=515,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.770 00:16:43.770 Run status group 0 (all jobs): 00:16:43.770 READ: bw=6324KiB/s (6476kB/s), 81.5KiB/s-4072KiB/s (83.4kB/s-4169kB/s), io=6520KiB (6676kB), run=1006-1031msec 00:16:43.770 WRITE: bw=12.4MiB/s (13.0MB/s), 1986KiB/s-4879KiB/s (2034kB/s-4996kB/s), io=12.8MiB (13.4MB), run=1006-1031msec 00:16:43.770 00:16:43.770 Disk stats (read/write): 00:16:43.770 nvme0n1: ios=115/512, merge=0/0, ticks=711/141, in_queue=852, util=86.17% 00:16:43.770 nvme0n2: ios=974/1024, merge=0/0, ticks=1413/319, in_queue=1732, util=95.81% 00:16:43.770 nvme0n3: ios=58/512, merge=0/0, ticks=1861/146, in_queue=2007, util=97.34% 00:16:43.770 nvme0n4: ios=551/784, merge=0/0, ticks=1441/267, in_queue=1708, util=96.66% 00:16:43.770 15:54:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:43.770 [global] 00:16:43.770 thread=1 00:16:43.770 invalidate=1 00:16:43.770 rw=write 00:16:43.770 time_based=1 00:16:43.770 runtime=1 00:16:43.770 ioengine=libaio 00:16:43.770 direct=1 00:16:43.770 bs=4096 00:16:43.770 iodepth=128 00:16:43.770 norandommap=0 00:16:43.770 numjobs=1 00:16:43.770 00:16:43.771 verify_dump=1 00:16:43.771 verify_backlog=512 00:16:43.771 verify_state_save=0 00:16:43.771 do_verify=1 00:16:43.771 verify=crc32c-intel 00:16:43.771 [job0] 00:16:43.771 filename=/dev/nvme0n1 00:16:43.771 [job1] 00:16:43.771 filename=/dev/nvme0n2 00:16:43.771 [job2] 00:16:43.771 filename=/dev/nvme0n3 00:16:43.771 [job3] 00:16:43.771 filename=/dev/nvme0n4 00:16:43.771 Could not set queue depth (nvme0n1) 00:16:43.771 Could not set queue depth (nvme0n2) 00:16:43.771 Could not set queue depth (nvme0n3) 00:16:43.771 Could not set queue depth (nvme0n4) 00:16:44.030 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:44.030 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:44.030 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:44.030 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:44.030 fio-3.35 00:16:44.030 Starting 4 threads 00:16:45.418 00:16:45.418 job0: (groupid=0, jobs=1): err= 0: pid=3743723: Wed May 15 15:54:43 2024 00:16:45.418 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:16:45.418 slat (nsec): min=1956, max=11830k, avg=114109.09, stdev=679507.34 00:16:45.418 clat (usec): min=7319, max=38872, avg=15005.70, stdev=5737.43 00:16:45.418 lat (usec): min=7330, max=38891, avg=15119.81, stdev=5792.70 00:16:45.418 clat percentiles (usec): 00:16:45.418 | 1.00th=[ 7570], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9634], 00:16:45.418 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13042], 60.00th=[15533], 00:16:45.418 | 70.00th=[17695], 80.00th=[20055], 90.00th=[22414], 95.00th=[25560], 00:16:45.418 | 99.00th=[31065], 99.50th=[34866], 99.90th=[35390], 99.95th=[35914], 00:16:45.418 | 99.99th=[39060] 00:16:45.418 write: IOPS=4142, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1006msec); 0 zone resets 00:16:45.418 slat (usec): min=2, max=10976, avg=117.98, stdev=624.46 00:16:45.418 clat (usec): min=1916, max=30636, avg=15863.29, stdev=5208.34 00:16:45.418 lat (usec): min=1931, max=30659, avg=15981.26, stdev=5232.60 00:16:45.418 clat percentiles (usec): 00:16:45.418 | 1.00th=[ 4490], 5.00th=[ 7046], 10.00th=[ 8717], 20.00th=[11338], 00:16:45.418 | 30.00th=[12780], 40.00th=[14877], 50.00th=[16909], 60.00th=[17695], 00:16:45.418 | 70.00th=[18482], 80.00th=[19268], 90.00th=[22676], 95.00th=[23987], 00:16:45.418 | 99.00th=[28967], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:16:45.418 | 99.99th=[30540] 00:16:45.418 bw ( KiB/s): min=12288, max=20480, per=24.68%, avg=16384.00, stdev=5792.62, samples=2 00:16:45.418 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:16:45.418 lat (msec) : 2=0.07%, 4=0.18%, 10=19.36%, 20=61.22%, 50=19.16% 00:16:45.418 cpu : usr=3.88%, sys=5.97%, ctx=477, majf=0, minf=1 00:16:45.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:45.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.418 issued rwts: total=4096,4167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.418 job1: (groupid=0, jobs=1): err= 0: pid=3743724: Wed May 15 15:54:43 2024 00:16:45.418 read: IOPS=4037, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1002msec) 00:16:45.418 slat (nsec): min=1911, max=15538k, avg=119367.69, stdev=756278.67 00:16:45.418 clat (usec): min=1011, max=40643, avg=15075.79, stdev=5375.76 00:16:45.418 lat (usec): min=3783, max=40651, avg=15195.16, stdev=5411.67 00:16:45.418 clat percentiles (usec): 00:16:45.418 | 1.00th=[ 4228], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10814], 00:16:45.418 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13173], 60.00th=[14484], 00:16:45.418 | 70.00th=[16909], 80.00th=[20055], 90.00th=[22152], 95.00th=[25297], 00:16:45.418 | 99.00th=[34341], 99.50th=[35914], 99.90th=[40633], 99.95th=[40633], 00:16:45.418 | 99.99th=[40633] 00:16:45.418 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:16:45.418 slat (usec): min=2, max=11784, avg=116.58, stdev=575.88 00:16:45.418 clat (usec): min=2300, max=45322, avg=15914.56, stdev=6990.97 00:16:45.418 lat (usec): min=2310, max=45326, avg=16031.14, stdev=7023.64 00:16:45.418 clat percentiles (usec): 00:16:45.418 | 1.00th=[ 5342], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[10945], 
00:16:45.418 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13829], 60.00th=[15533], 00:16:45.418 | 70.00th=[17957], 80.00th=[20579], 90.00th=[23200], 95.00th=[30278], 00:16:45.418 | 99.00th=[42730], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:16:45.418 | 99.99th=[45351] 00:16:45.418 bw ( KiB/s): min=16384, max=16384, per=24.68%, avg=16384.00, stdev= 0.00, samples=2 00:16:45.418 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:16:45.418 lat (msec) : 2=0.01%, 4=0.38%, 10=9.47%, 20=69.58%, 50=20.56% 00:16:45.418 cpu : usr=2.80%, sys=5.00%, ctx=533, majf=0, minf=1 00:16:45.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:45.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.418 issued rwts: total=4046,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.418 job2: (groupid=0, jobs=1): err= 0: pid=3743725: Wed May 15 15:54:43 2024 00:16:45.418 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:16:45.418 slat (usec): min=2, max=14182, avg=122.01, stdev=767.43 00:16:45.418 clat (usec): min=8763, max=41150, avg=16488.75, stdev=4672.93 00:16:45.418 lat (usec): min=8773, max=46000, avg=16610.76, stdev=4732.95 00:16:45.418 clat percentiles (usec): 00:16:45.418 | 1.00th=[10028], 5.00th=[11600], 10.00th=[12125], 20.00th=[12911], 00:16:45.418 | 30.00th=[13566], 40.00th=[14091], 50.00th=[15270], 60.00th=[16057], 00:16:45.418 | 70.00th=[17695], 80.00th=[20055], 90.00th=[21890], 95.00th=[25560], 00:16:45.418 | 99.00th=[33817], 99.50th=[35390], 99.90th=[41157], 99.95th=[41157], 00:16:45.418 | 99.99th=[41157] 00:16:45.418 write: IOPS=3605, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1007msec); 0 zone resets 00:16:45.418 slat (usec): min=3, max=14517, avg=147.42, stdev=750.54 00:16:45.418 clat (usec): min=5877, max=35678, avg=18666.86, stdev=6035.51 00:16:45.419 lat (usec): min=7509, max=35684, avg=18814.28, stdev=6068.14 00:16:45.419 clat percentiles (usec): 00:16:45.419 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[11076], 20.00th=[12649], 00:16:45.419 | 30.00th=[15008], 40.00th=[16712], 50.00th=[18220], 60.00th=[19006], 00:16:45.419 | 70.00th=[21627], 80.00th=[24249], 90.00th=[26608], 95.00th=[30016], 00:16:45.419 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:16:45.419 | 99.99th=[35914] 00:16:45.419 bw ( KiB/s): min=12288, max=16384, per=21.60%, avg=14336.00, stdev=2896.31, samples=2 00:16:45.419 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:16:45.419 lat (msec) : 10=2.41%, 20=69.18%, 50=28.41% 00:16:45.419 cpu : usr=4.57%, sys=3.38%, ctx=485, majf=0, minf=1 00:16:45.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:45.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.419 issued rwts: total=3584,3631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.419 job3: (groupid=0, jobs=1): err= 0: pid=3743726: Wed May 15 15:54:43 2024 00:16:45.419 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:16:45.419 slat (nsec): min=1766, max=16961k, avg=99798.36, stdev=721033.51 00:16:45.419 clat (usec): min=1242, max=27737, avg=13787.26, stdev=4319.45 00:16:45.419 lat (usec): min=1248, max=27741, 
avg=13887.06, stdev=4356.20 00:16:45.419 clat percentiles (usec): 00:16:45.419 | 1.00th=[ 2311], 5.00th=[ 7373], 10.00th=[ 9241], 20.00th=[10552], 00:16:45.419 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12911], 60.00th=[14222], 00:16:45.419 | 70.00th=[15664], 80.00th=[17433], 90.00th=[19268], 95.00th=[21890], 00:16:45.419 | 99.00th=[25297], 99.50th=[26084], 99.90th=[27657], 99.95th=[27657], 00:16:45.419 | 99.99th=[27657] 00:16:45.419 write: IOPS=4783, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1007msec); 0 zone resets 00:16:45.419 slat (usec): min=2, max=9047, avg=101.30, stdev=531.18 00:16:45.419 clat (usec): min=433, max=27734, avg=13333.58, stdev=3957.85 00:16:45.419 lat (usec): min=468, max=27739, avg=13434.88, stdev=3971.46 00:16:45.419 clat percentiles (usec): 00:16:45.419 | 1.00th=[ 5997], 5.00th=[ 7439], 10.00th=[ 8455], 20.00th=[ 9765], 00:16:45.419 | 30.00th=[11207], 40.00th=[12125], 50.00th=[12911], 60.00th=[14091], 00:16:45.419 | 70.00th=[15008], 80.00th=[16581], 90.00th=[19006], 95.00th=[20579], 00:16:45.419 | 99.00th=[22676], 99.50th=[23200], 99.90th=[25035], 99.95th=[26608], 00:16:45.419 | 99.99th=[27657] 00:16:45.419 bw ( KiB/s): min=17392, max=20128, per=28.26%, avg=18760.00, stdev=1934.64, samples=2 00:16:45.419 iops : min= 4348, max= 5032, avg=4690.00, stdev=483.66, samples=2 00:16:45.419 lat (usec) : 500=0.01% 00:16:45.419 lat (msec) : 2=0.06%, 4=0.83%, 10=17.01%, 20=74.11%, 50=7.98% 00:16:45.419 cpu : usr=2.58%, sys=5.67%, ctx=513, majf=0, minf=1 00:16:45.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:45.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.419 issued rwts: total=4608,4817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.419 00:16:45.419 Run status group 0 (all jobs): 00:16:45.419 READ: bw=63.4MiB/s (66.4MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.7MB/s), io=63.8MiB (66.9MB), run=1002-1007msec 00:16:45.419 WRITE: bw=64.8MiB/s (68.0MB/s), 14.1MiB/s-18.7MiB/s (14.8MB/s-19.6MB/s), io=65.3MiB (68.4MB), run=1002-1007msec 00:16:45.419 00:16:45.419 Disk stats (read/write): 00:16:45.419 nvme0n1: ios=3446/3584, merge=0/0, ticks=33662/42744, in_queue=76406, util=86.57% 00:16:45.419 nvme0n2: ios=3205/3584, merge=0/0, ticks=31211/38680, in_queue=69891, util=90.49% 00:16:45.419 nvme0n3: ios=3121/3095, merge=0/0, ticks=25562/25834, in_queue=51396, util=91.50% 00:16:45.419 nvme0n4: ios=3869/4096, merge=0/0, ticks=41320/42752, in_queue=84072, util=96.99% 00:16:45.419 15:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:45.419 [global] 00:16:45.419 thread=1 00:16:45.419 invalidate=1 00:16:45.419 rw=randwrite 00:16:45.419 time_based=1 00:16:45.419 runtime=1 00:16:45.419 ioengine=libaio 00:16:45.419 direct=1 00:16:45.419 bs=4096 00:16:45.419 iodepth=128 00:16:45.419 norandommap=0 00:16:45.419 numjobs=1 00:16:45.419 00:16:45.419 verify_dump=1 00:16:45.419 verify_backlog=512 00:16:45.419 verify_state_save=0 00:16:45.419 do_verify=1 00:16:45.419 verify=crc32c-intel 00:16:45.419 [job0] 00:16:45.419 filename=/dev/nvme0n1 00:16:45.419 [job1] 00:16:45.419 filename=/dev/nvme0n2 00:16:45.419 [job2] 00:16:45.419 filename=/dev/nvme0n3 00:16:45.419 [job3] 00:16:45.419 filename=/dev/nvme0n4 00:16:45.419 Could not set queue depth (nvme0n1) 00:16:45.419 
Could not set queue depth (nvme0n2) 00:16:45.419 Could not set queue depth (nvme0n3) 00:16:45.419 Could not set queue depth (nvme0n4) 00:16:45.679 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.679 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.679 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.679 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.679 fio-3.35 00:16:45.679 Starting 4 threads 00:16:47.091 00:16:47.091 job0: (groupid=0, jobs=1): err= 0: pid=3744143: Wed May 15 15:54:45 2024 00:16:47.091 read: IOPS=4940, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1005msec) 00:16:47.091 slat (usec): min=2, max=10731, avg=90.24, stdev=585.67 00:16:47.091 clat (usec): min=4024, max=25712, avg=11998.55, stdev=3300.16 00:16:47.091 lat (usec): min=4030, max=25723, avg=12088.78, stdev=3330.89 00:16:47.091 clat percentiles (usec): 00:16:47.091 | 1.00th=[ 6521], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8979], 00:16:47.091 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11338], 60.00th=[12256], 00:16:47.091 | 70.00th=[13304], 80.00th=[14615], 90.00th=[16909], 95.00th=[18220], 00:16:47.091 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24511], 99.95th=[24511], 00:16:47.091 | 99.99th=[25822] 00:16:47.091 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:16:47.091 slat (usec): min=3, max=10344, avg=99.28, stdev=513.50 00:16:47.091 clat (usec): min=1919, max=27868, avg=13243.61, stdev=4962.32 00:16:47.091 lat (usec): min=1935, max=27874, avg=13342.90, stdev=4982.95 00:16:47.091 clat percentiles (usec): 00:16:47.091 | 1.00th=[ 4883], 5.00th=[ 5669], 10.00th=[ 7504], 20.00th=[ 8848], 00:16:47.091 | 30.00th=[10028], 40.00th=[11469], 50.00th=[12256], 60.00th=[13829], 00:16:47.091 | 70.00th=[15795], 80.00th=[17957], 90.00th=[20841], 95.00th=[21890], 00:16:47.091 | 99.00th=[25297], 99.50th=[25822], 99.90th=[27132], 99.95th=[27132], 00:16:47.091 | 99.99th=[27919] 00:16:47.091 bw ( KiB/s): min=19392, max=21568, per=31.44%, avg=20480.00, stdev=1538.66, samples=2 00:16:47.091 iops : min= 4848, max= 5392, avg=5120.00, stdev=384.67, samples=2 00:16:47.091 lat (msec) : 2=0.03%, 4=0.03%, 10=27.96%, 20=64.75%, 50=7.23% 00:16:47.091 cpu : usr=5.88%, sys=6.87%, ctx=453, majf=0, minf=1 00:16:47.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:47.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.091 issued rwts: total=4965,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.091 job1: (groupid=0, jobs=1): err= 0: pid=3744146: Wed May 15 15:54:45 2024 00:16:47.091 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:16:47.091 slat (nsec): min=1756, max=19868k, avg=160945.00, stdev=1077681.20 00:16:47.091 clat (usec): min=1507, max=68637, avg=20895.19, stdev=13698.41 00:16:47.091 lat (usec): min=5285, max=68668, avg=21056.14, stdev=13801.06 00:16:47.091 clat percentiles (usec): 00:16:47.091 | 1.00th=[ 5604], 5.00th=[ 7046], 10.00th=[ 7963], 20.00th=[ 9896], 00:16:47.091 | 30.00th=[12387], 40.00th=[13435], 50.00th=[15270], 60.00th=[18482], 00:16:47.091 | 70.00th=[23987], 80.00th=[31851], 90.00th=[44303], 
95.00th=[49546], 00:16:47.091 | 99.00th=[57934], 99.50th=[58983], 99.90th=[58983], 99.95th=[67634], 00:16:47.091 | 99.99th=[68682] 00:16:47.091 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:16:47.091 slat (usec): min=2, max=16901, avg=149.84, stdev=980.65 00:16:47.091 clat (usec): min=1524, max=69624, avg=20176.23, stdev=9829.01 00:16:47.091 lat (usec): min=1562, max=69634, avg=20326.06, stdev=9894.24 00:16:47.091 clat percentiles (usec): 00:16:47.091 | 1.00th=[ 6521], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[11207], 00:16:47.091 | 30.00th=[13435], 40.00th=[16319], 50.00th=[18744], 60.00th=[20841], 00:16:47.091 | 70.00th=[22414], 80.00th=[27132], 90.00th=[34341], 95.00th=[39060], 00:16:47.091 | 99.00th=[50070], 99.50th=[55837], 99.90th=[64750], 99.95th=[64750], 00:16:47.091 | 99.99th=[69731] 00:16:47.091 bw ( KiB/s): min=12288, max=12312, per=18.88%, avg=12300.00, stdev=16.97, samples=2 00:16:47.091 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:16:47.091 lat (msec) : 2=0.08%, 4=0.03%, 10=16.70%, 20=42.09%, 50=38.35% 00:16:47.091 lat (msec) : 100=2.75% 00:16:47.091 cpu : usr=3.29%, sys=4.39%, ctx=300, majf=0, minf=1 00:16:47.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:47.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.091 issued rwts: total=3072,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.091 job2: (groupid=0, jobs=1): err= 0: pid=3744147: Wed May 15 15:54:45 2024 00:16:47.091 read: IOPS=3336, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1004msec) 00:16:47.091 slat (nsec): min=1863, max=25329k, avg=113142.56, stdev=967669.50 00:16:47.091 clat (usec): min=3104, max=50943, avg=16881.98, stdev=8025.06 00:16:47.091 lat (usec): min=3226, max=63402, avg=16995.13, stdev=8114.57 00:16:47.091 clat percentiles (usec): 00:16:47.091 | 1.00th=[ 4146], 5.00th=[ 6390], 10.00th=[ 9241], 20.00th=[11731], 00:16:47.091 | 30.00th=[12387], 40.00th=[13566], 50.00th=[14484], 60.00th=[15664], 00:16:47.091 | 70.00th=[19268], 80.00th=[22676], 90.00th=[27657], 95.00th=[30540], 00:16:47.091 | 99.00th=[42730], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:16:47.091 | 99.99th=[51119] 00:16:47.091 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:16:47.091 slat (usec): min=2, max=14668, avg=127.71, stdev=737.64 00:16:47.091 clat (msec): min=2, max=102, avg=19.75, stdev=16.29 00:16:47.091 lat (msec): min=2, max=102, avg=19.88, stdev=16.36 00:16:47.091 clat percentiles (msec): 00:16:47.091 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 10], 00:16:47.091 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:16:47.091 | 70.00th=[ 18], 80.00th=[ 25], 90.00th=[ 44], 95.00th=[ 59], 00:16:47.091 | 99.00th=[ 86], 99.50th=[ 99], 99.90th=[ 100], 99.95th=[ 100], 00:16:47.091 | 99.99th=[ 104] 00:16:47.091 bw ( KiB/s): min=10244, max=18448, per=22.02%, avg=14346.00, stdev=5801.10, samples=2 00:16:47.091 iops : min= 2561, max= 4612, avg=3586.50, stdev=1450.28, samples=2 00:16:47.091 lat (msec) : 4=1.10%, 10=16.05%, 20=57.85%, 50=20.39%, 100=4.60% 00:16:47.091 lat (msec) : 250=0.01% 00:16:47.091 cpu : usr=3.69%, sys=4.09%, ctx=522, majf=0, minf=1 00:16:47.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:47.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:16:47.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.092 issued rwts: total=3350,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.092 job3: (groupid=0, jobs=1): err= 0: pid=3744150: Wed May 15 15:54:45 2024 00:16:47.092 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:16:47.092 slat (nsec): min=1886, max=27548k, avg=104541.14, stdev=811260.19 00:16:47.092 clat (usec): min=3079, max=40203, avg=14490.24, stdev=5653.40 00:16:47.092 lat (usec): min=3094, max=40209, avg=14594.79, stdev=5687.22 00:16:47.092 clat percentiles (usec): 00:16:47.092 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10552], 00:16:47.092 | 30.00th=[11338], 40.00th=[12256], 50.00th=[12780], 60.00th=[14353], 00:16:47.092 | 70.00th=[15270], 80.00th=[16909], 90.00th=[21627], 95.00th=[27657], 00:16:47.092 | 99.00th=[35390], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:16:47.092 | 99.99th=[40109] 00:16:47.092 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:16:47.092 slat (usec): min=2, max=9641, avg=112.06, stdev=614.17 00:16:47.092 clat (usec): min=968, max=47679, avg=14780.18, stdev=7811.75 00:16:47.092 lat (usec): min=981, max=47690, avg=14892.24, stdev=7848.39 00:16:47.092 clat percentiles (usec): 00:16:47.092 | 1.00th=[ 3064], 5.00th=[ 6194], 10.00th=[ 7308], 20.00th=[ 9110], 00:16:47.092 | 30.00th=[10159], 40.00th=[11338], 50.00th=[12780], 60.00th=[14091], 00:16:47.092 | 70.00th=[16319], 80.00th=[19792], 90.00th=[25297], 95.00th=[31327], 00:16:47.092 | 99.00th=[42206], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:16:47.092 | 99.99th=[47449] 00:16:47.092 bw ( KiB/s): min=15400, max=20521, per=27.57%, avg=17960.50, stdev=3621.09, samples=2 00:16:47.092 iops : min= 3850, max= 5130, avg=4490.00, stdev=905.10, samples=2 00:16:47.092 lat (usec) : 1000=0.02% 00:16:47.092 lat (msec) : 2=0.33%, 4=0.83%, 10=20.92%, 20=62.22%, 50=15.68% 00:16:47.092 cpu : usr=5.47%, sys=6.27%, ctx=400, majf=0, minf=1 00:16:47.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:47.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.092 issued rwts: total=4100,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.092 00:16:47.092 Run status group 0 (all jobs): 00:16:47.092 READ: bw=60.1MiB/s (63.1MB/s), 12.0MiB/s-19.3MiB/s (12.5MB/s-20.2MB/s), io=60.5MiB (63.4MB), run=1003-1006msec 00:16:47.092 WRITE: bw=63.6MiB/s (66.7MB/s), 12.0MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1003-1006msec 00:16:47.092 00:16:47.092 Disk stats (read/write): 00:16:47.092 nvme0n1: ios=4145/4140, merge=0/0, ticks=48276/51079, in_queue=99355, util=89.58% 00:16:47.092 nvme0n2: ios=2077/2534, merge=0/0, ticks=20242/21005, in_queue=41247, util=94.03% 00:16:47.092 nvme0n3: ios=2534/2560, merge=0/0, ticks=39707/53902, in_queue=93609, util=99.35% 00:16:47.092 nvme0n4: ios=3398/3584, merge=0/0, ticks=46858/51646, in_queue=98504, util=97.90% 00:16:47.092 15:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:47.092 15:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3744421 00:16:47.092 15:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 1 -t read -r 10 00:16:47.092 15:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:47.092 [global] 00:16:47.092 thread=1 00:16:47.092 invalidate=1 00:16:47.092 rw=read 00:16:47.092 time_based=1 00:16:47.092 runtime=10 00:16:47.092 ioengine=libaio 00:16:47.092 direct=1 00:16:47.092 bs=4096 00:16:47.092 iodepth=1 00:16:47.092 norandommap=1 00:16:47.092 numjobs=1 00:16:47.092 00:16:47.092 [job0] 00:16:47.092 filename=/dev/nvme0n1 00:16:47.092 [job1] 00:16:47.092 filename=/dev/nvme0n2 00:16:47.092 [job2] 00:16:47.092 filename=/dev/nvme0n3 00:16:47.092 [job3] 00:16:47.092 filename=/dev/nvme0n4 00:16:47.092 Could not set queue depth (nvme0n1) 00:16:47.092 Could not set queue depth (nvme0n2) 00:16:47.092 Could not set queue depth (nvme0n3) 00:16:47.092 Could not set queue depth (nvme0n4) 00:16:47.355 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.355 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.355 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.355 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.355 fio-3.35 00:16:47.355 Starting 4 threads 00:16:49.885 15:54:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:50.143 15:54:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:50.143 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=10616832, buflen=4096 00:16:50.143 fio: pid=3744589, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:50.143 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=20516864, buflen=4096 00:16:50.143 fio: pid=3744582, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:50.143 15:54:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:50.143 15:54:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:50.402 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=532480, buflen=4096 00:16:50.402 fio: pid=3744573, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:50.402 15:54:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:50.402 15:54:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:50.660 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:50.660 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:50.661 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1511424, buflen=4096 00:16:50.661 fio: pid=3744574, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:16:50.661 00:16:50.661 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3744573: Wed May 15 15:54:49 2024 
00:16:50.661 read: IOPS=44, BW=177KiB/s (181kB/s)(520KiB/2938msec) 00:16:50.661 slat (nsec): min=7550, max=79244, avg=18929.08, stdev=9828.51 00:16:50.661 clat (usec): min=765, max=43011, avg=22416.32, stdev=20606.53 00:16:50.661 lat (usec): min=773, max=43035, avg=22435.21, stdev=20613.20 00:16:50.661 clat percentiles (usec): 00:16:50.661 | 1.00th=[ 775], 5.00th=[ 807], 10.00th=[ 832], 20.00th=[ 889], 00:16:50.661 | 30.00th=[ 922], 40.00th=[ 1029], 50.00th=[41681], 60.00th=[41681], 00:16:50.661 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:50.661 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:50.661 | 99.99th=[43254] 00:16:50.661 bw ( KiB/s): min= 96, max= 560, per=1.87%, avg=192.00, stdev=205.76, samples=5 00:16:50.661 iops : min= 24, max= 140, avg=48.00, stdev=51.44, samples=5 00:16:50.661 lat (usec) : 1000=38.17% 00:16:50.661 lat (msec) : 2=9.16%, 50=51.91% 00:16:50.661 cpu : usr=0.00%, sys=0.14%, ctx=137, majf=0, minf=1 00:16:50.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.661 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.661 issued rwts: total=131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.661 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3744574: Wed May 15 15:54:49 2024 00:16:50.661 read: IOPS=117, BW=467KiB/s (478kB/s)(1476KiB/3162msec) 00:16:50.661 slat (usec): min=8, max=19614, avg=118.16, stdev=1253.28 00:16:50.661 clat (usec): min=472, max=42137, avg=8447.16, stdev=16222.41 00:16:50.661 lat (usec): min=481, max=61060, avg=8547.67, stdev=16448.18 00:16:50.661 clat percentiles (usec): 00:16:50.661 | 1.00th=[ 490], 5.00th=[ 498], 10.00th=[ 510], 20.00th=[ 523], 00:16:50.661 | 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 627], 00:16:50.661 | 70.00th=[ 701], 80.00th=[ 1074], 90.00th=[42206], 95.00th=[42206], 00:16:50.661 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:50.661 | 99.99th=[42206] 00:16:50.661 bw ( KiB/s): min= 84, max= 2448, per=4.74%, avg=486.00, stdev=961.19, samples=6 00:16:50.661 iops : min= 21, max= 612, avg=121.50, stdev=240.30, samples=6 00:16:50.661 lat (usec) : 500=6.76%, 750=65.95%, 1000=6.76% 00:16:50.661 lat (msec) : 2=1.35%, 50=18.92% 00:16:50.661 cpu : usr=0.06%, sys=0.35%, ctx=373, majf=0, minf=1 00:16:50.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.661 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.661 issued rwts: total=370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.661 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3744582: Wed May 15 15:54:49 2024 00:16:50.661 read: IOPS=1809, BW=7236KiB/s (7409kB/s)(19.6MiB/2769msec) 00:16:50.661 slat (usec): min=8, max=13592, avg=14.98, stdev=248.23 00:16:50.661 clat (usec): min=426, max=1301, avg=530.90, stdev=51.84 00:16:50.661 lat (usec): min=435, max=14448, avg=545.88, stdev=260.21 00:16:50.661 clat percentiles (usec): 00:16:50.661 | 1.00th=[ 449], 5.00th=[ 461], 10.00th=[ 474], 20.00th=[ 486], 00:16:50.661 | 30.00th=[ 502], 40.00th=[ 519], 
50.00th=[ 537], 60.00th=[ 545], 00:16:50.661 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 570], 95.00th=[ 603], 00:16:50.661 | 99.00th=[ 717], 99.50th=[ 766], 99.90th=[ 922], 99.95th=[ 996], 00:16:50.661 | 99.99th=[ 1303] 00:16:50.661 bw ( KiB/s): min= 7056, max= 8048, per=72.20%, avg=7398.40, stdev=421.71, samples=5 00:16:50.661 iops : min= 1764, max= 2012, avg=1849.60, stdev=105.43, samples=5 00:16:50.661 lat (usec) : 500=29.68%, 750=69.68%, 1000=0.58% 00:16:50.661 lat (msec) : 2=0.04% 00:16:50.661 cpu : usr=1.41%, sys=3.07%, ctx=5013, majf=0, minf=1 00:16:50.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.661 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.661 issued rwts: total=5010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.661 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3744589: Wed May 15 15:54:49 2024 00:16:50.661 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(10.1MiB/2592msec) 00:16:50.661 slat (nsec): min=6566, max=49280, avg=10014.04, stdev=2368.35 00:16:50.661 clat (usec): min=483, max=42110, avg=979.96, stdev=4125.71 00:16:50.661 lat (usec): min=492, max=42133, avg=989.97, stdev=4127.11 00:16:50.661 clat percentiles (usec): 00:16:50.661 | 1.00th=[ 502], 5.00th=[ 523], 10.00th=[ 529], 20.00th=[ 545], 00:16:50.661 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 553], 60.00th=[ 562], 00:16:50.661 | 70.00th=[ 570], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 635], 00:16:50.661 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:50.661 | 99.99th=[42206] 00:16:50.661 bw ( KiB/s): min= 96, max= 7056, per=40.44%, avg=4144.00, stdev=3702.04, samples=5 00:16:50.661 iops : min= 24, max= 1764, avg=1036.00, stdev=925.51, samples=5 00:16:50.661 lat (usec) : 500=0.66%, 750=97.07%, 1000=0.89% 00:16:50.661 lat (msec) : 2=0.27%, 4=0.04%, 10=0.04%, 50=1.00% 00:16:50.661 cpu : usr=0.73%, sys=1.78%, ctx=2593, majf=0, minf=2 00:16:50.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.661 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.661 issued rwts: total=2593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.661 00:16:50.661 Run status group 0 (all jobs): 00:16:50.661 READ: bw=10.0MiB/s (10.5MB/s), 177KiB/s-7236KiB/s (181kB/s-7409kB/s), io=31.6MiB (33.2MB), run=2592-3162msec 00:16:50.661 00:16:50.661 Disk stats (read/write): 00:16:50.661 nvme0n1: ios=126/0, merge=0/0, ticks=2745/0, in_queue=2745, util=92.69% 00:16:50.661 nvme0n2: ios=365/0, merge=0/0, ticks=2949/0, in_queue=2949, util=93.27% 00:16:50.661 nvme0n3: ios=4665/0, merge=0/0, ticks=2348/0, in_queue=2348, util=95.54% 00:16:50.661 nvme0n4: ios=2590/0, merge=0/0, ticks=2434/0, in_queue=2434, util=96.37% 00:16:50.919 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:50.919 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:50.919 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:16:50.919 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:51.177 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:51.177 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:51.435 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:51.435 15:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3744421 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:51.693 nvmf hotplug test: fio failed as expected 00:16:51.693 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.951 rmmod nvme_tcp 00:16:51.951 rmmod nvme_fabrics 
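[Annotation] The "fio failed as expected" message above is the point of this test: the harness deletes the raid/concat/malloc bdevs while fio is still reading, so the Remote I/O errors (err=121) seen per job are the desired outcome. A sketch of the control flow, assuming target/fio.sh follows the pattern visible in the trace (fio_status captured from wait at line 70, then inverted at line 75); this is illustrative, not the verbatim script:

  fio_status=0
  wait $fio_pid || fio_status=$?     # fio exits non-zero once its backing bdevs are gone
  if [ $fio_status -eq 0 ]; then
      echo 'nvmf hotplug test: fio did not fail' && exit 1
  else
      echo 'nvmf hotplug test: fio failed as expected'
  fi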
00:16:51.951 rmmod nvme_keyring 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3741328 ']' 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3741328 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3741328 ']' 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3741328 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3741328 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3741328' 00:16:51.951 killing process with pid 3741328 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3741328 00:16:51.951 [2024-05-15 15:54:50.479678] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:51.951 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3741328 00:16:52.210 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:52.210 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:52.210 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:52.210 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.210 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:52.210 15:54:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.210 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.210 15:54:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.774 15:54:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.774 00:16:54.774 real 0m27.862s 00:16:54.774 user 2m2.596s 00:16:54.774 sys 0m9.423s 00:16:54.774 15:54:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:54.774 15:54:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.774 ************************************ 00:16:54.774 END TEST nvmf_fio_target 00:16:54.774 ************************************ 00:16:54.774 15:54:52 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:54.774 15:54:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:54.774 15:54:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.774 15:54:52 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.774 ************************************ 00:16:54.774 START TEST nvmf_bdevio 00:16:54.774 ************************************ 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:54.774 * Looking for test storage... 00:16:54.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.774 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.775 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.775 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.775 15:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:54.775 15:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.775 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:54.775 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:54.775 15:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.775 15:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:01.338 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:01.338 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.338 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:01.339 Found net devices under 0000:af:00.0: cvl_0_0 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:01.339 
Found net devices under 0000:af:00.1: cvl_0_1 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:17:01.339 00:17:01.339 --- 10.0.0.2 ping statistics --- 00:17:01.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.339 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:17:01.339 00:17:01.339 --- 10.0.0.1 ping statistics --- 00:17:01.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.339 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3749094 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3749094 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3749094 ']' 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.339 15:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:01.339 [2024-05-15 15:54:59.883236] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:01.339 [2024-05-15 15:54:59.883290] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.596 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.596 [2024-05-15 15:54:59.957837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.596 [2024-05-15 15:55:00.040147] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.596 [2024-05-15 15:55:00.040182] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:01.596 [2024-05-15 15:55:00.040198] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.596 [2024-05-15 15:55:00.040207] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.596 [2024-05-15 15:55:00.040214] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.596 [2024-05-15 15:55:00.040331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:01.596 [2024-05-15 15:55:00.040440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:01.596 [2024-05-15 15:55:00.040551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:01.596 [2024-05-15 15:55:00.040550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.158 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.158 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:17:02.158 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.158 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.158 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.414 [2024-05-15 15:55:00.738094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.414 Malloc0 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
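[Annotation] Stripped of the xtrace noise, the target bring-up for this bdevio run is the app launch from the previous block plus the four RPC calls just logged, all issued against the nvmf_tgt running inside the target namespace. A condensed sketch (paths shortened; assumes the default /var/tmp/spdk.sock RPC socket):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420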
00:17:02.414 [2024-05-15 15:55:00.792111] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:02.414 [2024-05-15 15:55:00.792384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.414 { 00:17:02.414 "params": { 00:17:02.414 "name": "Nvme$subsystem", 00:17:02.414 "trtype": "$TEST_TRANSPORT", 00:17:02.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.414 "adrfam": "ipv4", 00:17:02.414 "trsvcid": "$NVMF_PORT", 00:17:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.414 "hdgst": ${hdgst:-false}, 00:17:02.414 "ddgst": ${ddgst:-false} 00:17:02.414 }, 00:17:02.414 "method": "bdev_nvme_attach_controller" 00:17:02.414 } 00:17:02.414 EOF 00:17:02.414 )") 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:02.414 15:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.414 "params": { 00:17:02.414 "name": "Nvme1", 00:17:02.414 "trtype": "tcp", 00:17:02.414 "traddr": "10.0.0.2", 00:17:02.414 "adrfam": "ipv4", 00:17:02.414 "trsvcid": "4420", 00:17:02.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.414 "hdgst": false, 00:17:02.414 "ddgst": false 00:17:02.414 }, 00:17:02.414 "method": "bdev_nvme_attach_controller" 00:17:02.414 }' 00:17:02.415 [2024-05-15 15:55:00.844498] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
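[Annotation] The JSON printed above is the initiator-side bdev configuration that bdevio consumes over /dev/fd/62. It should be equivalent to attaching the controller by RPC; a sketch of the same attach as an rpc.py call (flag names as in current SPDK rpc.py, shown for illustration):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1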
00:17:02.415 [2024-05-15 15:55:00.844552] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749377 ] 00:17:02.415 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.415 [2024-05-15 15:55:00.916076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.671 [2024-05-15 15:55:00.987863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.671 [2024-05-15 15:55:00.987958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.671 [2024-05-15 15:55:00.987961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.671 I/O targets: 00:17:02.671 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:02.671 00:17:02.671 00:17:02.671 CUnit - A unit testing framework for C - Version 2.1-3 00:17:02.671 http://cunit.sourceforge.net/ 00:17:02.671 00:17:02.671 00:17:02.671 Suite: bdevio tests on: Nvme1n1 00:17:02.671 Test: blockdev write read block ...passed 00:17:02.927 Test: blockdev write zeroes read block ...passed 00:17:02.927 Test: blockdev write zeroes read no split ...passed 00:17:02.927 Test: blockdev write zeroes read split ...passed 00:17:02.927 Test: blockdev write zeroes read split partial ...passed 00:17:02.927 Test: blockdev reset ...[2024-05-15 15:55:01.378140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:02.927 [2024-05-15 15:55:01.378206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb467b0 (9): Bad file descriptor 00:17:03.184 [2024-05-15 15:55:01.521054] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:03.184 passed 00:17:03.184 Test: blockdev write read 8 blocks ...passed 00:17:03.184 Test: blockdev write read size > 128k ...passed 00:17:03.184 Test: blockdev write read invalid size ...passed 00:17:03.184 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.184 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.184 Test: blockdev write read max offset ...passed 00:17:03.184 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.184 Test: blockdev writev readv 8 blocks ...passed 00:17:03.184 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.441 Test: blockdev writev readv block ...passed 00:17:03.441 Test: blockdev writev readv size > 128k ...passed 00:17:03.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.441 Test: blockdev comparev and writev ...[2024-05-15 15:55:01.790827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.441 [2024-05-15 15:55:01.790858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.790874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.441 [2024-05-15 15:55:01.790884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.791332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.441 [2024-05-15 15:55:01.791346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.791360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.441 [2024-05-15 15:55:01.791370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.791802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.441 [2024-05-15 15:55:01.791816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.791830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.441 [2024-05-15 15:55:01.791840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.792290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.441 [2024-05-15 15:55:01.792304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.792318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.441 [2024-05-15 15:55:01.792328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:03.441 passed 00:17:03.441 Test: blockdev nvme passthru rw ...passed 00:17:03.441 Test: blockdev nvme passthru vendor specific ...[2024-05-15 15:55:01.875985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.441 [2024-05-15 15:55:01.876002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.876338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.441 [2024-05-15 15:55:01.876351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.876674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.441 [2024-05-15 15:55:01.876686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:03.441 [2024-05-15 15:55:01.877013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.441 [2024-05-15 15:55:01.877026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:03.441 passed 00:17:03.441 Test: blockdev nvme admin passthru ...passed 00:17:03.441 Test: blockdev copy ...passed 00:17:03.441 00:17:03.441 Run Summary: Type Total Ran Passed Failed Inactive 00:17:03.441 suites 1 1 n/a 0 0 00:17:03.441 tests 23 23 23 0 0 00:17:03.441 asserts 152 152 152 0 n/a 00:17:03.441 00:17:03.441 Elapsed time = 1.497 seconds 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.697 rmmod nvme_tcp 00:17:03.697 rmmod nvme_fabrics 00:17:03.697 rmmod nvme_keyring 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3749094 ']' 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3749094 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3749094 ']' 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3749094 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3749094 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3749094' 00:17:03.697 killing process with pid 3749094 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3749094 00:17:03.697 [2024-05-15 15:55:02.236352] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:03.697 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3749094 00:17:03.955 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.955 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.955 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.955 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.955 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.955 15:55:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.955 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.955 15:55:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.488 15:55:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.488 00:17:06.488 real 0m11.680s 00:17:06.488 user 0m13.902s 00:17:06.488 sys 0m5.824s 00:17:06.488 15:55:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:06.488 15:55:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:06.488 ************************************ 00:17:06.488 END TEST nvmf_bdevio 00:17:06.488 ************************************ 00:17:06.488 15:55:04 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:06.488 15:55:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:06.488 15:55:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:06.488 15:55:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.488 ************************************ 00:17:06.488 START TEST nvmf_auth_target 00:17:06.488 ************************************ 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:06.488 * Looking for test storage... 
00:17:06.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.488 15:55:04 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.489 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:13.051 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:13.051 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.051 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:13.052 Found net devices under 
0000:af:00.0: cvl_0_0 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:13.052 Found net devices under 0000:af:00.1: cvl_0_1 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.052 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:13.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:17:13.052 00:17:13.052 --- 10.0.0.2 ping statistics --- 00:17:13.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.052 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:17:13.052 00:17:13.052 --- 10.0.0.1 ping statistics --- 00:17:13.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.052 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3753199 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3753199 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3753199 ']' 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
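
The nvmf_tcp_init sequence above splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, and both directions are pinged before the target starts. A minimal standalone sketch of that setup, assuming the same interface names and addresses as this run:

#!/usr/bin/env bash
# Minimal sketch of the nvmf_tcp_init sequence traced above.
# Assumes two ports of one NIC: cvl_0_0 (target side), cvl_0_1 (initiator side).
set -e

NS=cvl_0_0_ns_spdk            # namespace that will own the target port
TGT_IF=cvl_0_0 TGT_IP=10.0.0.2
INI_IF=cvl_0_1 INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"   # start from a clean slate
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                        # hide the target port in the namespace

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP (port 4420) in from the initiator side
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# verify both directions before starting the target
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"

Launching nvmf_tgt under ip netns exec (the NVMF_TARGET_NS_CMD prefix above) then confines the target to the namespaced port, so the connection can only travel over the NIC pair under test.
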
00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:13.052 15:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.618 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:13.618 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:13.618 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.618 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.618 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=3753364 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=76b99d660df042bab4a2201532e4059c25b9d015344041f2 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.T3P 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 76b99d660df042bab4a2201532e4059c25b9d015344041f2 0 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 76b99d660df042bab4a2201532e4059c25b9d015344041f2 0 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=76b99d660df042bab4a2201532e4059c25b9d015344041f2 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.T3P 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.T3P 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.T3P 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4b81ca6fa052af03ba7cc50952780406 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jXN 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4b81ca6fa052af03ba7cc50952780406 1 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4b81ca6fa052af03ba7cc50952780406 1 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4b81ca6fa052af03ba7cc50952780406 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jXN 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jXN 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.jXN 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d4d4e631d384383546c7ff89d61e0623440f37ae189b1f16 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gHf 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d4d4e631d384383546c7ff89d61e0623440f37ae189b1f16 2 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d4d4e631d384383546c7ff89d61e0623440f37ae189b1f16 2 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d4d4e631d384383546c7ff89d61e0623440f37ae189b1f16 00:17:13.877 
15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gHf 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gHf 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.gHf 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1c05014c760f7403b0309519fd4c82bfe1d8e9ebf5ce8e8cbf0fead2e2a66bfb 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LLh 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1c05014c760f7403b0309519fd4c82bfe1d8e9ebf5ce8e8cbf0fead2e2a66bfb 3 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1c05014c760f7403b0309519fd4c82bfe1d8e9ebf5ce8e8cbf0fead2e2a66bfb 3 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.877 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.878 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1c05014c760f7403b0309519fd4c82bfe1d8e9ebf5ce8e8cbf0fead2e2a66bfb 00:17:13.878 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:13.878 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LLh 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LLh 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.LLh 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 3753199 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3753199 ']' 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
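
Each gen_dhchap_key call above follows the same recipe: read len/2 random bytes, keep their hex expansion as the key string, wrap it in the DHHC-1 secret format with the digest id from the map (null=0, sha256=1, sha384=2, sha512=3), and stash it mode 0600 in a temp file. A sketch of that formatting, with one assumption flagged: the trace does not show the python body, so the CRC-32 trailer and its little-endian byte order are taken from the usual DHHC-1 secret convention, not from this log:

#!/usr/bin/env bash
# Sketch of gen_dhchap_key as traced above: hex key from /dev/urandom,
# wrapped in the DHHC-1 secret format.  Digest ids match the log's map:
# null=0, sha256=1, sha384=2, sha512=3.
digest_id=$1   # e.g. 0 for a null digest
len=$2         # key length in hex characters, e.g. 48

key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # same pipeline as nvmf/common.sh@727

# base64(key ASCII + CRC-32 of it); the little-endian CRC trailer is assumed
# here from the common DHHC-1 convention, since the trace hides this step.
secret=$(python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "$digest_id")

file=$(mktemp -t spdk.key-XXXXXX)
printf '%s\n' "$secret" > "$file"
chmod 0600 "$file"            # DH-HMAC-CHAP secrets must not be world-readable
echo "$file"

The resulting files are what keyring_file_add_key registers below as key0 through key3, and the same secrets resurface verbatim on the later nvme connect --dhchap-secret lines (the DHHC-1:00: string there base64-encodes the 76b99d... hex key generated above).
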
00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 3753364 /var/tmp/host.sock 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3753364 ']' 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:14.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:14.136 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.T3P 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.T3P 00:17:14.394 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.T3P 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.jXN 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.jXN 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.jXN 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gHf 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gHf 00:17:14.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gHf 00:17:14.909 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:14.909 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LLh 00:17:14.909 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.909 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.909 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.909 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LLh 00:17:14.909 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.LLh 00:17:15.167 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:15.167 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.167 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:15.167 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.167 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:15.425 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:15.425 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:15.682 { 00:17:15.682 "cntlid": 1, 00:17:15.682 "qid": 0, 00:17:15.682 "state": "enabled", 00:17:15.682 "listen_address": { 00:17:15.682 "trtype": "TCP", 00:17:15.682 "adrfam": "IPv4", 00:17:15.682 "traddr": "10.0.0.2", 00:17:15.682 "trsvcid": "4420" 00:17:15.682 }, 00:17:15.682 "peer_address": { 00:17:15.682 "trtype": "TCP", 00:17:15.682 "adrfam": "IPv4", 00:17:15.682 "traddr": "10.0.0.1", 00:17:15.682 "trsvcid": "32870" 00:17:15.682 }, 00:17:15.682 "auth": { 00:17:15.682 "state": "completed", 00:17:15.682 "digest": "sha256", 00:17:15.682 "dhgroup": "null" 00:17:15.682 } 00:17:15.682 } 00:17:15.682 ]' 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:15.682 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:15.940 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.940 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.940 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.940 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:17:16.504 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:16.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.504 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:16.504 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.504 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.504 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.504 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:16.504 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.504 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:16.761 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:17.019 00:17:17.019 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:17.019 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.019 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:17.275 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.275 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.276 15:55:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:17.276 { 00:17:17.276 "cntlid": 3, 00:17:17.276 "qid": 0, 00:17:17.276 "state": "enabled", 00:17:17.276 "listen_address": { 00:17:17.276 "trtype": "TCP", 00:17:17.276 "adrfam": "IPv4", 00:17:17.276 "traddr": "10.0.0.2", 00:17:17.276 "trsvcid": "4420" 00:17:17.276 }, 00:17:17.276 "peer_address": { 00:17:17.276 "trtype": "TCP", 00:17:17.276 "adrfam": "IPv4", 00:17:17.276 "traddr": "10.0.0.1", 00:17:17.276 "trsvcid": "32902" 00:17:17.276 }, 00:17:17.276 "auth": { 00:17:17.276 "state": "completed", 00:17:17.276 "digest": "sha256", 00:17:17.276 "dhgroup": "null" 00:17:17.276 } 00:17:17.276 } 00:17:17.276 ]' 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.276 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.580 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:18.146 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:18.403 00:17:18.403 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:18.403 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:18.403 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:18.661 { 00:17:18.661 "cntlid": 5, 00:17:18.661 "qid": 0, 00:17:18.661 "state": "enabled", 00:17:18.661 "listen_address": { 00:17:18.661 "trtype": "TCP", 00:17:18.661 "adrfam": "IPv4", 00:17:18.661 "traddr": "10.0.0.2", 00:17:18.661 "trsvcid": "4420" 00:17:18.661 }, 00:17:18.661 "peer_address": { 00:17:18.661 "trtype": "TCP", 00:17:18.661 "adrfam": "IPv4", 00:17:18.661 "traddr": "10.0.0.1", 00:17:18.661 "trsvcid": "32926" 00:17:18.661 }, 00:17:18.661 "auth": { 00:17:18.661 "state": "completed", 00:17:18.661 "digest": "sha256", 00:17:18.661 "dhgroup": "null" 00:17:18.661 } 00:17:18.661 } 00:17:18.661 ]' 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.661 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.919 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:17:19.484 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.484 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:19.484 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.484 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.484 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.484 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:19.484 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.484 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.742 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.742 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:20.000 15:55:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:20.000 { 00:17:20.000 "cntlid": 7, 00:17:20.000 "qid": 0, 00:17:20.000 "state": "enabled", 00:17:20.000 "listen_address": { 00:17:20.000 "trtype": "TCP", 00:17:20.000 "adrfam": "IPv4", 00:17:20.000 "traddr": "10.0.0.2", 00:17:20.000 "trsvcid": "4420" 00:17:20.000 }, 00:17:20.000 "peer_address": { 00:17:20.000 "trtype": "TCP", 00:17:20.000 "adrfam": "IPv4", 00:17:20.000 "traddr": "10.0.0.1", 00:17:20.000 "trsvcid": "39366" 00:17:20.000 }, 00:17:20.000 "auth": { 00:17:20.000 "state": "completed", 00:17:20.000 "digest": "sha256", 00:17:20.000 "dhgroup": "null" 00:17:20.000 } 00:17:20.000 } 00:17:20.000 ]' 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.000 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:20.258 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:20.258 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:20.258 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.258 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.258 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.258 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:20.825 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.082 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:17:21.082 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:21.082 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.082 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:21.082 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.082 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:21.083 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.083 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.083 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.083 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:21.083 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:21.340 00:17:21.340 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:21.340 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:21.340 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.605 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.605 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.605 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.605 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.605 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.605 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:21.605 { 00:17:21.605 "cntlid": 9, 00:17:21.605 "qid": 0, 00:17:21.605 "state": "enabled", 00:17:21.605 "listen_address": { 00:17:21.606 "trtype": "TCP", 00:17:21.606 "adrfam": "IPv4", 00:17:21.606 "traddr": "10.0.0.2", 00:17:21.606 "trsvcid": "4420" 00:17:21.606 }, 00:17:21.606 "peer_address": { 00:17:21.606 "trtype": "TCP", 00:17:21.606 "adrfam": "IPv4", 00:17:21.606 "traddr": "10.0.0.1", 
00:17:21.606 "trsvcid": "39398" 00:17:21.606 }, 00:17:21.606 "auth": { 00:17:21.606 "state": "completed", 00:17:21.606 "digest": "sha256", 00:17:21.606 "dhgroup": "ffdhe2048" 00:17:21.606 } 00:17:21.606 } 00:17:21.606 ]' 00:17:21.606 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:21.606 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.606 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:21.606 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.606 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:21.606 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.606 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.606 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.868 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:22.433 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:22.691 00:17:22.691 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:22.691 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.691 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:22.949 { 00:17:22.949 "cntlid": 11, 00:17:22.949 "qid": 0, 00:17:22.949 "state": "enabled", 00:17:22.949 "listen_address": { 00:17:22.949 "trtype": "TCP", 00:17:22.949 "adrfam": "IPv4", 00:17:22.949 "traddr": "10.0.0.2", 00:17:22.949 "trsvcid": "4420" 00:17:22.949 }, 00:17:22.949 "peer_address": { 00:17:22.949 "trtype": "TCP", 00:17:22.949 "adrfam": "IPv4", 00:17:22.949 "traddr": "10.0.0.1", 00:17:22.949 "trsvcid": "39414" 00:17:22.949 }, 00:17:22.949 "auth": { 00:17:22.949 "state": "completed", 00:17:22.949 "digest": "sha256", 00:17:22.949 "dhgroup": "ffdhe2048" 00:17:22.949 } 00:17:22.949 } 00:17:22.949 ]' 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.949 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.208 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:17:23.773 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.774 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:23.774 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.774 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.774 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.774 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:23.774 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.774 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:24.032 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:24.290 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:24.290 { 00:17:24.290 "cntlid": 13, 00:17:24.290 "qid": 0, 00:17:24.290 "state": "enabled", 00:17:24.290 "listen_address": { 00:17:24.290 "trtype": "TCP", 00:17:24.290 "adrfam": "IPv4", 00:17:24.290 "traddr": "10.0.0.2", 00:17:24.290 "trsvcid": "4420" 00:17:24.290 }, 00:17:24.290 "peer_address": { 00:17:24.290 "trtype": "TCP", 00:17:24.290 "adrfam": "IPv4", 00:17:24.290 "traddr": "10.0.0.1", 00:17:24.290 "trsvcid": "39444" 00:17:24.290 }, 00:17:24.290 "auth": { 00:17:24.290 "state": "completed", 00:17:24.290 "digest": "sha256", 00:17:24.290 "dhgroup": "ffdhe2048" 00:17:24.290 } 00:17:24.290 } 00:17:24.290 ]' 00:17:24.290 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:24.548 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.548 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:24.548 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.548 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:24.548 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.548 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.548 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.807 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:17:25.373 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.374 15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.632 00:17:25.632 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:25.632 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:25.632 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:25.890 { 00:17:25.890 "cntlid": 15, 00:17:25.890 "qid": 0, 00:17:25.890 "state": "enabled", 00:17:25.890 "listen_address": { 00:17:25.890 "trtype": "TCP", 00:17:25.890 "adrfam": "IPv4", 00:17:25.890 "traddr": "10.0.0.2", 00:17:25.890 "trsvcid": "4420" 00:17:25.890 }, 00:17:25.890 "peer_address": { 00:17:25.890 "trtype": "TCP", 00:17:25.890 "adrfam": "IPv4", 00:17:25.890 "traddr": "10.0.0.1", 00:17:25.890 "trsvcid": "39456" 00:17:25.890 }, 00:17:25.890 "auth": { 00:17:25.890 "state": "completed", 00:17:25.890 "digest": "sha256", 00:17:25.890 "dhgroup": "ffdhe2048" 00:17:25.890 } 00:17:25.890 } 00:17:25.890 ]' 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.890 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.148 15:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.714 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.972 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:26.972 15:55:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:27.230 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:27.230 { 00:17:27.230 "cntlid": 17, 00:17:27.230 "qid": 0, 00:17:27.230 "state": "enabled", 00:17:27.230 "listen_address": { 00:17:27.230 "trtype": "TCP", 00:17:27.230 "adrfam": "IPv4", 00:17:27.230 "traddr": "10.0.0.2", 00:17:27.230 "trsvcid": "4420" 00:17:27.230 }, 00:17:27.230 "peer_address": { 00:17:27.230 "trtype": "TCP", 00:17:27.230 "adrfam": "IPv4", 00:17:27.230 "traddr": "10.0.0.1", 00:17:27.230 "trsvcid": "39478" 00:17:27.230 }, 00:17:27.230 "auth": { 00:17:27.230 "state": "completed", 00:17:27.230 "digest": "sha256", 00:17:27.230 "dhgroup": "ffdhe3072" 00:17:27.230 } 00:17:27.230 } 00:17:27.230 ]' 00:17:27.230 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:27.487 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.487 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:27.487 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.487 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:27.487 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.487 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.487 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.745 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:28.310 15:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:28.567 00:17:28.567 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:28.568 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.568 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:28.826 { 
00:17:28.826 "cntlid": 19, 00:17:28.826 "qid": 0, 00:17:28.826 "state": "enabled", 00:17:28.826 "listen_address": { 00:17:28.826 "trtype": "TCP", 00:17:28.826 "adrfam": "IPv4", 00:17:28.826 "traddr": "10.0.0.2", 00:17:28.826 "trsvcid": "4420" 00:17:28.826 }, 00:17:28.826 "peer_address": { 00:17:28.826 "trtype": "TCP", 00:17:28.826 "adrfam": "IPv4", 00:17:28.826 "traddr": "10.0.0.1", 00:17:28.826 "trsvcid": "39498" 00:17:28.826 }, 00:17:28.826 "auth": { 00:17:28.826 "state": "completed", 00:17:28.826 "digest": "sha256", 00:17:28.826 "dhgroup": "ffdhe3072" 00:17:28.826 } 00:17:28.826 } 00:17:28.826 ]' 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.826 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.083 15:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:17:29.669 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.670 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:29.670 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.670 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.670 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.670 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:29.670 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.670 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.949 
15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.949 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.950 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.950 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:30.208 { 00:17:30.208 "cntlid": 21, 00:17:30.208 "qid": 0, 00:17:30.208 "state": "enabled", 00:17:30.208 "listen_address": { 00:17:30.208 "trtype": "TCP", 00:17:30.208 "adrfam": "IPv4", 00:17:30.208 "traddr": "10.0.0.2", 00:17:30.208 "trsvcid": "4420" 00:17:30.208 }, 00:17:30.208 "peer_address": { 00:17:30.208 "trtype": "TCP", 00:17:30.208 "adrfam": "IPv4", 00:17:30.208 "traddr": "10.0.0.1", 00:17:30.208 "trsvcid": "49818" 00:17:30.208 }, 00:17:30.208 "auth": { 00:17:30.208 "state": "completed", 00:17:30.208 "digest": "sha256", 00:17:30.208 "dhgroup": "ffdhe3072" 00:17:30.208 } 00:17:30.208 } 00:17:30.208 ]' 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.208 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:30.466 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.466 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:30.466 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.466 15:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.466 15:55:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.466 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:17:31.032 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.032 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:31.032 15:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.032 15:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.032 15:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.032 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:31.032 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.032 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.290 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.548 00:17:31.548 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:31.548 15:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:31.548 15:55:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:31.806 { 00:17:31.806 "cntlid": 23, 00:17:31.806 "qid": 0, 00:17:31.806 "state": "enabled", 00:17:31.806 "listen_address": { 00:17:31.806 "trtype": "TCP", 00:17:31.806 "adrfam": "IPv4", 00:17:31.806 "traddr": "10.0.0.2", 00:17:31.806 "trsvcid": "4420" 00:17:31.806 }, 00:17:31.806 "peer_address": { 00:17:31.806 "trtype": "TCP", 00:17:31.806 "adrfam": "IPv4", 00:17:31.806 "traddr": "10.0.0.1", 00:17:31.806 "trsvcid": "49848" 00:17:31.806 }, 00:17:31.806 "auth": { 00:17:31.806 "state": "completed", 00:17:31.806 "digest": "sha256", 00:17:31.806 "dhgroup": "ffdhe3072" 00:17:31.806 } 00:17:31.806 } 00:17:31.806 ]' 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.806 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.064 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:17:32.630 15:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.630 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:32.630 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.630 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.630 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.630 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.630 15:55:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:32.630 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:32.630 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:32.888 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:32.888 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:33.147 { 00:17:33.147 "cntlid": 25, 00:17:33.147 "qid": 0, 00:17:33.147 "state": "enabled", 00:17:33.147 "listen_address": { 00:17:33.147 "trtype": "TCP", 00:17:33.147 "adrfam": "IPv4", 00:17:33.147 "traddr": "10.0.0.2", 00:17:33.147 "trsvcid": "4420" 00:17:33.147 }, 00:17:33.147 "peer_address": { 00:17:33.147 "trtype": "TCP", 00:17:33.147 "adrfam": "IPv4", 00:17:33.147 "traddr": "10.0.0.1", 00:17:33.147 "trsvcid": "49888" 00:17:33.147 }, 
00:17:33.147 "auth": { 00:17:33.147 "state": "completed", 00:17:33.147 "digest": "sha256", 00:17:33.147 "dhgroup": "ffdhe4096" 00:17:33.147 } 00:17:33.147 } 00:17:33.147 ]' 00:17:33.147 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:33.406 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.406 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:33.406 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.406 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:33.406 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.406 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.406 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.406 15:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:17:33.972 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.972 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:33.972 15:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.972 15:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.972 15:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.972 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:33.972 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:33.972 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:34.230 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:34.487 00:17:34.487 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:34.487 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:34.487 15:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:34.745 { 00:17:34.745 "cntlid": 27, 00:17:34.745 "qid": 0, 00:17:34.745 "state": "enabled", 00:17:34.745 "listen_address": { 00:17:34.745 "trtype": "TCP", 00:17:34.745 "adrfam": "IPv4", 00:17:34.745 "traddr": "10.0.0.2", 00:17:34.745 "trsvcid": "4420" 00:17:34.745 }, 00:17:34.745 "peer_address": { 00:17:34.745 "trtype": "TCP", 00:17:34.745 "adrfam": "IPv4", 00:17:34.745 "traddr": "10.0.0.1", 00:17:34.745 "trsvcid": "49908" 00:17:34.745 }, 00:17:34.745 "auth": { 00:17:34.745 "state": "completed", 00:17:34.745 "digest": "sha256", 00:17:34.745 "dhgroup": "ffdhe4096" 00:17:34.745 } 00:17:34.745 } 00:17:34.745 ]' 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.745 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.004 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:17:35.568 15:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.568 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:35.568 15:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.568 15:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.568 15:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.568 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:35.568 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.569 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:35.828 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:36.087 00:17:36.087 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:36.087 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:36.087 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.087 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.344 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.344 15:55:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.344 15:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.344 15:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.344 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:36.344 { 00:17:36.344 "cntlid": 29, 00:17:36.344 "qid": 0, 00:17:36.344 "state": "enabled", 00:17:36.344 "listen_address": { 00:17:36.344 "trtype": "TCP", 00:17:36.344 "adrfam": "IPv4", 00:17:36.345 "traddr": "10.0.0.2", 00:17:36.345 "trsvcid": "4420" 00:17:36.345 }, 00:17:36.345 "peer_address": { 00:17:36.345 "trtype": "TCP", 00:17:36.345 "adrfam": "IPv4", 00:17:36.345 "traddr": "10.0.0.1", 00:17:36.345 "trsvcid": "49932" 00:17:36.345 }, 00:17:36.345 "auth": { 00:17:36.345 "state": "completed", 00:17:36.345 "digest": "sha256", 00:17:36.345 "dhgroup": "ffdhe4096" 00:17:36.345 } 00:17:36.345 } 00:17:36.345 ]' 00:17:36.345 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:36.345 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.345 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:36.345 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.345 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:36.345 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.345 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.345 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.603 15:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.169 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.427 00:17:37.427 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:37.427 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.427 15:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:37.686 { 00:17:37.686 "cntlid": 31, 00:17:37.686 "qid": 0, 00:17:37.686 "state": "enabled", 00:17:37.686 "listen_address": { 00:17:37.686 "trtype": "TCP", 00:17:37.686 "adrfam": "IPv4", 00:17:37.686 "traddr": "10.0.0.2", 00:17:37.686 "trsvcid": "4420" 00:17:37.686 }, 00:17:37.686 "peer_address": { 00:17:37.686 "trtype": "TCP", 00:17:37.686 "adrfam": "IPv4", 00:17:37.686 "traddr": "10.0.0.1", 00:17:37.686 "trsvcid": "49966" 00:17:37.686 }, 00:17:37.686 "auth": { 00:17:37.686 "state": "completed", 00:17:37.686 "digest": "sha256", 00:17:37.686 "dhgroup": "ffdhe4096" 00:17:37.686 } 00:17:37.686 } 00:17:37.686 ]' 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:37.686 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:37.944 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.944 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.944 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.945 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:17:38.512 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.512 15:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:38.512 15:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.512 15:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.512 15:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.512 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.513 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:38.513 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:38.513 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:38.771 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:39.029 00:17:39.029 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:39.029 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:39.029 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:39.287 { 00:17:39.287 "cntlid": 33, 00:17:39.287 "qid": 0, 00:17:39.287 "state": "enabled", 00:17:39.287 "listen_address": { 00:17:39.287 "trtype": "TCP", 00:17:39.287 "adrfam": "IPv4", 00:17:39.287 "traddr": "10.0.0.2", 00:17:39.287 "trsvcid": "4420" 00:17:39.287 }, 00:17:39.287 "peer_address": { 00:17:39.287 "trtype": "TCP", 00:17:39.287 "adrfam": "IPv4", 00:17:39.287 "traddr": "10.0.0.1", 00:17:39.287 "trsvcid": "50006" 00:17:39.287 }, 00:17:39.287 "auth": { 00:17:39.287 "state": "completed", 00:17:39.287 "digest": "sha256", 00:17:39.287 "dhgroup": "ffdhe6144" 00:17:39.287 } 00:17:39.287 } 00:17:39.287 ]' 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.287 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:39.546 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.546 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.546 15:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.546 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:17:40.111 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.111 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:40.111 15:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.111 15:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.111 15:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.111 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:40.111 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:40.111 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:40.369 15:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:40.626 00:17:40.626 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:40.626 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.626 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:40.883 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.883 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.883 15:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.883 15:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.883 15:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.883 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:40.883 { 00:17:40.883 "cntlid": 35, 00:17:40.883 "qid": 0, 
00:17:40.883 "state": "enabled", 00:17:40.883 "listen_address": { 00:17:40.883 "trtype": "TCP", 00:17:40.883 "adrfam": "IPv4", 00:17:40.883 "traddr": "10.0.0.2", 00:17:40.883 "trsvcid": "4420" 00:17:40.883 }, 00:17:40.883 "peer_address": { 00:17:40.883 "trtype": "TCP", 00:17:40.883 "adrfam": "IPv4", 00:17:40.883 "traddr": "10.0.0.1", 00:17:40.883 "trsvcid": "42952" 00:17:40.883 }, 00:17:40.883 "auth": { 00:17:40.883 "state": "completed", 00:17:40.884 "digest": "sha256", 00:17:40.884 "dhgroup": "ffdhe6144" 00:17:40.884 } 00:17:40.884 } 00:17:40.884 ]' 00:17:40.884 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:40.884 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.884 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:40.884 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.884 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:40.884 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.884 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.884 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.141 15:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:17:41.706 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.706 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:41.706 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.706 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.706 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.706 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:41.706 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.706 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:41.964 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:42.222 00:17:42.222 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:42.222 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:42.222 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:42.480 { 00:17:42.480 "cntlid": 37, 00:17:42.480 "qid": 0, 00:17:42.480 "state": "enabled", 00:17:42.480 "listen_address": { 00:17:42.480 "trtype": "TCP", 00:17:42.480 "adrfam": "IPv4", 00:17:42.480 "traddr": "10.0.0.2", 00:17:42.480 "trsvcid": "4420" 00:17:42.480 }, 00:17:42.480 "peer_address": { 00:17:42.480 "trtype": "TCP", 00:17:42.480 "adrfam": "IPv4", 00:17:42.480 "traddr": "10.0.0.1", 00:17:42.480 "trsvcid": "42970" 00:17:42.480 }, 00:17:42.480 "auth": { 00:17:42.480 "state": "completed", 00:17:42.480 "digest": "sha256", 00:17:42.480 "dhgroup": "ffdhe6144" 00:17:42.480 } 00:17:42.480 } 00:17:42.480 ]' 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.480 15:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.738 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.330 15:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.896 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:43.896 { 00:17:43.896 "cntlid": 39, 00:17:43.896 "qid": 0, 00:17:43.896 "state": "enabled", 00:17:43.896 "listen_address": { 00:17:43.896 "trtype": "TCP", 00:17:43.896 "adrfam": "IPv4", 00:17:43.896 "traddr": "10.0.0.2", 00:17:43.896 "trsvcid": "4420" 00:17:43.896 }, 00:17:43.896 "peer_address": { 00:17:43.896 "trtype": "TCP", 00:17:43.896 "adrfam": "IPv4", 00:17:43.896 "traddr": "10.0.0.1", 00:17:43.896 "trsvcid": "42988" 00:17:43.896 }, 00:17:43.896 "auth": { 00:17:43.896 "state": "completed", 00:17:43.896 "digest": "sha256", 00:17:43.896 "dhgroup": "ffdhe6144" 00:17:43.896 } 00:17:43.896 } 00:17:43.896 ]' 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.896 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:44.154 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.154 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:44.154 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.154 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.154 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.154 15:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.720 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 15:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.980 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:44.980 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:45.546 00:17:45.546 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:45.546 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.546 15:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:45.546 { 00:17:45.546 "cntlid": 41, 00:17:45.546 "qid": 0, 00:17:45.546 "state": "enabled", 00:17:45.546 "listen_address": { 00:17:45.546 "trtype": "TCP", 00:17:45.546 "adrfam": "IPv4", 00:17:45.546 "traddr": "10.0.0.2", 00:17:45.546 "trsvcid": "4420" 00:17:45.546 }, 00:17:45.546 "peer_address": { 00:17:45.546 "trtype": "TCP", 00:17:45.546 "adrfam": "IPv4", 00:17:45.546 "traddr": "10.0.0.1", 00:17:45.546 "trsvcid": "43004" 00:17:45.546 }, 00:17:45.546 "auth": { 00:17:45.546 "state": 
"completed", 00:17:45.546 "digest": "sha256", 00:17:45.546 "dhgroup": "ffdhe8192" 00:17:45.546 } 00:17:45.546 } 00:17:45.546 ]' 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.546 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:45.804 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.804 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:45.804 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.804 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.804 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.804 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:17:46.370 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.370 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:46.370 15:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.370 15:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.370 15:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.370 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:46.370 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:46.370 15:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:46.628 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:47.194 00:17:47.194 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:47.194 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.194 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:47.194 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.194 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.194 15:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.194 15:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:47.452 { 00:17:47.452 "cntlid": 43, 00:17:47.452 "qid": 0, 00:17:47.452 "state": "enabled", 00:17:47.452 "listen_address": { 00:17:47.452 "trtype": "TCP", 00:17:47.452 "adrfam": "IPv4", 00:17:47.452 "traddr": "10.0.0.2", 00:17:47.452 "trsvcid": "4420" 00:17:47.452 }, 00:17:47.452 "peer_address": { 00:17:47.452 "trtype": "TCP", 00:17:47.452 "adrfam": "IPv4", 00:17:47.452 "traddr": "10.0.0.1", 00:17:47.452 "trsvcid": "43028" 00:17:47.452 }, 00:17:47.452 "auth": { 00:17:47.452 "state": "completed", 00:17:47.452 "digest": "sha256", 00:17:47.452 "dhgroup": "ffdhe8192" 00:17:47.452 } 00:17:47.452 } 00:17:47.452 ]' 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.452 15:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.710 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:48.276 15:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:48.841 00:17:48.841 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:48.841 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.841 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:49.098 { 00:17:49.098 "cntlid": 45, 00:17:49.098 "qid": 0, 00:17:49.098 "state": "enabled", 00:17:49.098 "listen_address": { 00:17:49.098 "trtype": "TCP", 00:17:49.098 "adrfam": "IPv4", 00:17:49.098 "traddr": "10.0.0.2", 00:17:49.098 "trsvcid": "4420" 00:17:49.098 }, 00:17:49.098 "peer_address": { 00:17:49.098 "trtype": "TCP", 00:17:49.098 "adrfam": "IPv4", 00:17:49.098 "traddr": "10.0.0.1", 00:17:49.098 "trsvcid": "43062" 00:17:49.098 }, 00:17:49.098 "auth": { 00:17:49.098 "state": "completed", 00:17:49.098 "digest": "sha256", 00:17:49.098 "dhgroup": "ffdhe8192" 00:17:49.098 } 00:17:49.098 } 00:17:49.098 ]' 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.098 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.356 15:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:17:49.922 
15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.922 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.488 00:17:50.488 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:50.488 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:50.488 15:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:50.746 { 00:17:50.746 "cntlid": 47, 00:17:50.746 "qid": 0, 00:17:50.746 "state": "enabled", 00:17:50.746 "listen_address": { 00:17:50.746 "trtype": "TCP", 00:17:50.746 "adrfam": "IPv4", 00:17:50.746 "traddr": "10.0.0.2", 00:17:50.746 "trsvcid": "4420" 00:17:50.746 }, 00:17:50.746 "peer_address": { 00:17:50.746 "trtype": "TCP", 00:17:50.746 "adrfam": "IPv4", 00:17:50.746 "traddr": "10.0.0.1", 00:17:50.746 "trsvcid": "51138" 00:17:50.746 }, 00:17:50.746 "auth": { 00:17:50.746 "state": "completed", 00:17:50.746 "digest": "sha256", 00:17:50.746 "dhgroup": "ffdhe8192" 00:17:50.746 } 00:17:50.746 } 00:17:50.746 ]' 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.746 
15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.746 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.005 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:51.571 15:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:51.829 15:55:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:51.829 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:51.829 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:52.087 { 00:17:52.087 "cntlid": 49, 00:17:52.087 "qid": 0, 00:17:52.087 "state": "enabled", 00:17:52.087 "listen_address": { 00:17:52.087 "trtype": "TCP", 00:17:52.087 "adrfam": "IPv4", 00:17:52.087 "traddr": "10.0.0.2", 00:17:52.087 "trsvcid": "4420" 00:17:52.087 }, 00:17:52.087 "peer_address": { 00:17:52.087 "trtype": "TCP", 00:17:52.087 "adrfam": "IPv4", 00:17:52.087 "traddr": "10.0.0.1", 00:17:52.087 "trsvcid": "51166" 00:17:52.087 }, 00:17:52.087 "auth": { 00:17:52.087 "state": "completed", 00:17:52.087 "digest": "sha384", 00:17:52.087 "dhgroup": "null" 00:17:52.087 } 00:17:52.087 } 00:17:52.087 ]' 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.087 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:52.344 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:52.344 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:52.344 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.344 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.344 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.344 15:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:17:52.921 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.921 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:52.921 15:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.921 15:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.921 15:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.921 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:52.921 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:52.921 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:53.178 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:53.436 00:17:53.436 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:53.436 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:53.436 15:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:53.693 { 00:17:53.693 "cntlid": 51, 00:17:53.693 "qid": 
0, 00:17:53.693 "state": "enabled", 00:17:53.693 "listen_address": { 00:17:53.693 "trtype": "TCP", 00:17:53.693 "adrfam": "IPv4", 00:17:53.693 "traddr": "10.0.0.2", 00:17:53.693 "trsvcid": "4420" 00:17:53.693 }, 00:17:53.693 "peer_address": { 00:17:53.693 "trtype": "TCP", 00:17:53.693 "adrfam": "IPv4", 00:17:53.693 "traddr": "10.0.0.1", 00:17:53.693 "trsvcid": "51194" 00:17:53.693 }, 00:17:53.693 "auth": { 00:17:53.693 "state": "completed", 00:17:53.693 "digest": "sha384", 00:17:53.693 "dhgroup": "null" 00:17:53.693 } 00:17:53.693 } 00:17:53.693 ]' 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.693 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.951 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:17:54.517 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.517 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:54.517 15:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.517 15:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.517 15:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.517 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:54.517 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:54.517 15:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:54.517 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:54.775 00:17:54.775 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:54.775 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:54.775 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:55.032 { 00:17:55.032 "cntlid": 53, 00:17:55.032 "qid": 0, 00:17:55.032 "state": "enabled", 00:17:55.032 "listen_address": { 00:17:55.032 "trtype": "TCP", 00:17:55.032 "adrfam": "IPv4", 00:17:55.032 "traddr": "10.0.0.2", 00:17:55.032 "trsvcid": "4420" 00:17:55.032 }, 00:17:55.032 "peer_address": { 00:17:55.032 "trtype": "TCP", 00:17:55.032 "adrfam": "IPv4", 00:17:55.032 "traddr": "10.0.0.1", 00:17:55.032 "trsvcid": "51228" 00:17:55.032 }, 00:17:55.032 "auth": { 00:17:55.032 "state": "completed", 00:17:55.032 "digest": "sha384", 00:17:55.032 "dhgroup": "null" 00:17:55.032 } 00:17:55.032 } 00:17:55.032 ]' 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:55.032 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:55.290 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.290 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.290 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.290 15:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:17:55.856 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.856 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:55.856 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.856 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.856 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.856 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:55.856 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:55.856 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.153 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.411 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:56.411 { 00:17:56.411 "cntlid": 55, 00:17:56.411 "qid": 0, 00:17:56.411 "state": "enabled", 00:17:56.411 "listen_address": { 00:17:56.411 "trtype": "TCP", 00:17:56.411 "adrfam": "IPv4", 00:17:56.411 "traddr": "10.0.0.2", 00:17:56.411 "trsvcid": "4420" 00:17:56.411 }, 00:17:56.411 "peer_address": { 00:17:56.411 "trtype": "TCP", 00:17:56.411 "adrfam": "IPv4", 00:17:56.411 "traddr": "10.0.0.1", 00:17:56.411 "trsvcid": "51262" 00:17:56.411 }, 00:17:56.411 "auth": { 00:17:56.411 "state": "completed", 00:17:56.411 "digest": "sha384", 00:17:56.411 "dhgroup": "null" 00:17:56.411 } 00:17:56.411 } 00:17:56.411 ]' 00:17:56.411 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:56.669 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.669 15:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:56.669 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:56.669 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:56.669 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.669 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.669 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.927 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:57.495 15:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:57.753 00:17:57.753 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:57.753 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:57.754 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:58.012 { 00:17:58.012 "cntlid": 57, 00:17:58.012 "qid": 0, 00:17:58.012 "state": "enabled", 00:17:58.012 "listen_address": { 00:17:58.012 "trtype": "TCP", 00:17:58.012 "adrfam": "IPv4", 00:17:58.012 "traddr": "10.0.0.2", 00:17:58.012 "trsvcid": "4420" 00:17:58.012 }, 00:17:58.012 "peer_address": { 00:17:58.012 "trtype": "TCP", 00:17:58.012 "adrfam": "IPv4", 00:17:58.012 "traddr": "10.0.0.1", 00:17:58.012 "trsvcid": "51290" 00:17:58.012 }, 00:17:58.012 "auth": { 00:17:58.012 "state": "completed", 00:17:58.012 "digest": "sha384", 00:17:58.012 "dhgroup": "ffdhe2048" 00:17:58.012 } 00:17:58.012 } 
00:17:58.012 ]' 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.012 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.013 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.272 15:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:58.840 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:59.099 00:17:59.099 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:59.099 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:59.099 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:59.357 { 00:17:59.357 "cntlid": 59, 00:17:59.357 "qid": 0, 00:17:59.357 "state": "enabled", 00:17:59.357 "listen_address": { 00:17:59.357 "trtype": "TCP", 00:17:59.357 "adrfam": "IPv4", 00:17:59.357 "traddr": "10.0.0.2", 00:17:59.357 "trsvcid": "4420" 00:17:59.357 }, 00:17:59.357 "peer_address": { 00:17:59.357 "trtype": "TCP", 00:17:59.357 "adrfam": "IPv4", 00:17:59.357 "traddr": "10.0.0.1", 00:17:59.357 "trsvcid": "51310" 00:17:59.357 }, 00:17:59.357 "auth": { 00:17:59.357 "state": "completed", 00:17:59.357 "digest": "sha384", 00:17:59.357 "dhgroup": "ffdhe2048" 00:17:59.357 } 00:17:59.357 } 00:17:59.357 ]' 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.357 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:59.617 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.617 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.617 15:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.617 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:00.183 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.183 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:00.183 15:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.183 15:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.183 15:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.183 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:00.183 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.183 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:00.443 15:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:00.703 00:18:00.703 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:00.703 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:00.703 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:00.962 { 00:18:00.962 "cntlid": 61, 00:18:00.962 "qid": 0, 00:18:00.962 "state": "enabled", 00:18:00.962 "listen_address": { 00:18:00.962 "trtype": "TCP", 00:18:00.962 "adrfam": "IPv4", 00:18:00.962 "traddr": "10.0.0.2", 00:18:00.962 "trsvcid": "4420" 00:18:00.962 }, 00:18:00.962 "peer_address": { 00:18:00.962 "trtype": "TCP", 00:18:00.962 "adrfam": "IPv4", 00:18:00.962 "traddr": "10.0.0.1", 00:18:00.962 "trsvcid": "33674" 00:18:00.962 }, 00:18:00.962 "auth": { 00:18:00.962 "state": "completed", 00:18:00.962 "digest": "sha384", 00:18:00.962 "dhgroup": "ffdhe2048" 00:18:00.962 } 00:18:00.962 } 00:18:00.962 ]' 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.962 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.221 15:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:01.788 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.789 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:01.789 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.789 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.789 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.789 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.789 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.047 00:18:02.047 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:02.047 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:02.047 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:02.308 { 00:18:02.308 "cntlid": 63, 00:18:02.308 "qid": 0, 00:18:02.308 "state": "enabled", 00:18:02.308 "listen_address": { 00:18:02.308 "trtype": "TCP", 00:18:02.308 "adrfam": "IPv4", 00:18:02.308 "traddr": "10.0.0.2", 00:18:02.308 "trsvcid": "4420" 00:18:02.308 }, 00:18:02.308 "peer_address": { 00:18:02.308 "trtype": "TCP", 00:18:02.308 "adrfam": "IPv4", 00:18:02.308 "traddr": "10.0.0.1", 00:18:02.308 "trsvcid": "33704" 00:18:02.308 }, 00:18:02.308 "auth": { 00:18:02.308 "state": "completed", 00:18:02.308 "digest": "sha384", 00:18:02.308 "dhgroup": "ffdhe2048" 00:18:02.308 } 00:18:02.308 } 00:18:02.308 ]' 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.308 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:02.567 15:56:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.567 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.567 15:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.567 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.135 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:03.395 15:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:03.654 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:03.654 { 00:18:03.654 "cntlid": 65, 00:18:03.654 "qid": 0, 00:18:03.654 "state": "enabled", 00:18:03.654 "listen_address": { 00:18:03.654 "trtype": "TCP", 00:18:03.654 "adrfam": "IPv4", 00:18:03.654 "traddr": "10.0.0.2", 00:18:03.654 "trsvcid": "4420" 00:18:03.654 }, 00:18:03.654 "peer_address": { 00:18:03.654 "trtype": "TCP", 00:18:03.654 "adrfam": "IPv4", 00:18:03.654 "traddr": "10.0.0.1", 00:18:03.654 "trsvcid": "33716" 00:18:03.654 }, 00:18:03.654 "auth": { 00:18:03.654 "state": "completed", 00:18:03.654 "digest": "sha384", 00:18:03.654 "dhgroup": "ffdhe3072" 00:18:03.654 } 00:18:03.654 } 00:18:03.654 ]' 00:18:03.654 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:03.912 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.913 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:03.913 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.913 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:03.913 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.913 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.913 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.171 15:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.739 
15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:04.739 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:04.997 00:18:04.997 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:04.997 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.997 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:05.257 { 00:18:05.257 "cntlid": 67, 00:18:05.257 "qid": 0, 00:18:05.257 "state": "enabled", 00:18:05.257 "listen_address": { 00:18:05.257 "trtype": "TCP", 00:18:05.257 "adrfam": "IPv4", 00:18:05.257 "traddr": "10.0.0.2", 00:18:05.257 "trsvcid": 
"4420" 00:18:05.257 }, 00:18:05.257 "peer_address": { 00:18:05.257 "trtype": "TCP", 00:18:05.257 "adrfam": "IPv4", 00:18:05.257 "traddr": "10.0.0.1", 00:18:05.257 "trsvcid": "33742" 00:18:05.257 }, 00:18:05.257 "auth": { 00:18:05.257 "state": "completed", 00:18:05.257 "digest": "sha384", 00:18:05.257 "dhgroup": "ffdhe3072" 00:18:05.257 } 00:18:05.257 } 00:18:05.257 ]' 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.257 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.515 15:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:06.083 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.083 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:06.083 15:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.083 15:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.083 15:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.083 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:06.083 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:06.083 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:06.344 15:56:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.344 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.344 00:18:06.603 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:06.603 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:06.603 15:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:06.603 { 00:18:06.603 "cntlid": 69, 00:18:06.603 "qid": 0, 00:18:06.603 "state": "enabled", 00:18:06.603 "listen_address": { 00:18:06.603 "trtype": "TCP", 00:18:06.603 "adrfam": "IPv4", 00:18:06.603 "traddr": "10.0.0.2", 00:18:06.603 "trsvcid": "4420" 00:18:06.603 }, 00:18:06.603 "peer_address": { 00:18:06.603 "trtype": "TCP", 00:18:06.603 "adrfam": "IPv4", 00:18:06.603 "traddr": "10.0.0.1", 00:18:06.603 "trsvcid": "33784" 00:18:06.603 }, 00:18:06.603 "auth": { 00:18:06.603 "state": "completed", 00:18:06.603 "digest": "sha384", 00:18:06.603 "dhgroup": "ffdhe3072" 00:18:06.603 } 00:18:06.603 } 00:18:06.603 ]' 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.603 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:06.862 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.862 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:06.862 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.862 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.862 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.862 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:07.430 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.430 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:07.430 15:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.430 15:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.430 15:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.430 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:07.430 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.430 15:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.690 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.949 00:18:07.949 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:07.949 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:07.949 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.208 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:18:08.208 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:08.209 { 00:18:08.209 "cntlid": 71, 00:18:08.209 "qid": 0, 00:18:08.209 "state": "enabled", 00:18:08.209 "listen_address": { 00:18:08.209 "trtype": "TCP", 00:18:08.209 "adrfam": "IPv4", 00:18:08.209 "traddr": "10.0.0.2", 00:18:08.209 "trsvcid": "4420" 00:18:08.209 }, 00:18:08.209 "peer_address": { 00:18:08.209 "trtype": "TCP", 00:18:08.209 "adrfam": "IPv4", 00:18:08.209 "traddr": "10.0.0.1", 00:18:08.209 "trsvcid": "33820" 00:18:08.209 }, 00:18:08.209 "auth": { 00:18:08.209 "state": "completed", 00:18:08.209 "digest": "sha384", 00:18:08.209 "dhgroup": "ffdhe3072" 00:18:08.209 } 00:18:08.209 } 00:18:08.209 ]' 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.209 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.468 15:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:09.072 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.072 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:09.072 15:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.072 15:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.072 15:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.072 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.072 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:09.072 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:09.072 15:56:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:09.346 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.346 15:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:09.605 { 00:18:09.605 "cntlid": 73, 00:18:09.605 "qid": 0, 00:18:09.605 "state": "enabled", 00:18:09.605 "listen_address": { 00:18:09.605 "trtype": "TCP", 00:18:09.605 "adrfam": "IPv4", 00:18:09.605 "traddr": "10.0.0.2", 00:18:09.605 "trsvcid": "4420" 00:18:09.605 }, 00:18:09.605 "peer_address": { 00:18:09.605 "trtype": "TCP", 00:18:09.605 "adrfam": "IPv4", 00:18:09.605 "traddr": "10.0.0.1", 00:18:09.605 "trsvcid": "32992" 00:18:09.605 }, 00:18:09.605 "auth": { 00:18:09.605 "state": "completed", 00:18:09.605 "digest": "sha384", 00:18:09.605 "dhgroup": "ffdhe4096" 00:18:09.605 } 00:18:09.605 } 00:18:09.605 ]' 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.605 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:09.864 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.864 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:09.864 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.864 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.864 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.864 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:10.432 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.432 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:10.432 15:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.432 15:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.432 15:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.432 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:10.432 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:10.432 15:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:10.691 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:10.951 00:18:10.951 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:10.951 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:10.951 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:11.210 { 00:18:11.210 "cntlid": 75, 00:18:11.210 "qid": 0, 00:18:11.210 "state": "enabled", 00:18:11.210 "listen_address": { 00:18:11.210 "trtype": "TCP", 00:18:11.210 "adrfam": "IPv4", 00:18:11.210 "traddr": "10.0.0.2", 00:18:11.210 "trsvcid": "4420" 00:18:11.210 }, 00:18:11.210 "peer_address": { 00:18:11.210 "trtype": "TCP", 00:18:11.210 "adrfam": "IPv4", 00:18:11.210 "traddr": "10.0.0.1", 00:18:11.210 "trsvcid": "33016" 00:18:11.210 }, 00:18:11.210 "auth": { 00:18:11.210 "state": "completed", 00:18:11.210 "digest": "sha384", 00:18:11.210 "dhgroup": "ffdhe4096" 00:18:11.210 } 00:18:11.210 } 00:18:11.210 ]' 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.210 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.469 15:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:12.037 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:12.037 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:12.037 15:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.037 15:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.037 15:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.037 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:12.037 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:12.037 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:12.296 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:12.556 00:18:12.556 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:12.556 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:12.556 15:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.556 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.556 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.556 15:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.556 15:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
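Every pass above and below follows the same connect_authenticate() shape from target/auth.sh: pin the host-side bdev layer to a single digest/dhgroup pair, register the host NQN on the subsystem with the key under test, attach a controller (the step that actually runs the DH-HMAC-CHAP exchange), read the negotiated parameters back off the target's qpair (the JSON dump printed next is exactly that payload), and tear down. A minimal stand-alone sketch of one iteration, built only from the RPCs and flags visible in this log; the rpc.py and socket paths are carried over from this run, and it is an assumption here that the target-side rpc_cmd talks to the default SPDK socket:

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (the sha384/ffdhe4096/key2 pass).
set -e
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk            # workspace path from this run
hostrpc() { "$spdk/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side bdev daemon
rpc_cmd() { "$spdk/scripts/rpc.py" "$@"; }                        # target side (default socket, assumed)
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# 1. Allow exactly one digest/dhgroup pair on the host for this pass.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# 2. Register the host on the subsystem with the key under test.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2

# 3. Attach a controller; DH-HMAC-CHAP runs during this connect.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key2
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. Verify what the target negotiated on the qpair.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

# 5. Detach so the next key/dhgroup combination starts clean.
hostrpc bdev_nvme_detach_controller nvme0

The [[ sha384 == \s\h\a\3\8\4 ]] lines in the trace are these same comparisons; bash xtrace backslash-escapes the right-hand side of == because it is treated as a pattern.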
00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:12.815 { 00:18:12.815 "cntlid": 77, 00:18:12.815 "qid": 0, 00:18:12.815 "state": "enabled", 00:18:12.815 "listen_address": { 00:18:12.815 "trtype": "TCP", 00:18:12.815 "adrfam": "IPv4", 00:18:12.815 "traddr": "10.0.0.2", 00:18:12.815 "trsvcid": "4420" 00:18:12.815 }, 00:18:12.815 "peer_address": { 00:18:12.815 "trtype": "TCP", 00:18:12.815 "adrfam": "IPv4", 00:18:12.815 "traddr": "10.0.0.1", 00:18:12.815 "trsvcid": "33046" 00:18:12.815 }, 00:18:12.815 "auth": { 00:18:12.815 "state": "completed", 00:18:12.815 "digest": "sha384", 00:18:12.815 "dhgroup": "ffdhe4096" 00:18:12.815 } 00:18:12.815 } 00:18:12.815 ]' 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.815 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.074 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:13.642 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.642 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:13.642 15:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.642 15:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.642 15:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.642 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:13.642 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:13.642 15:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.642 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.901 00:18:13.901 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:13.901 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:13.901 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:14.160 { 00:18:14.160 "cntlid": 79, 00:18:14.160 "qid": 0, 00:18:14.160 "state": "enabled", 00:18:14.160 "listen_address": { 00:18:14.160 "trtype": "TCP", 00:18:14.160 "adrfam": "IPv4", 00:18:14.160 "traddr": "10.0.0.2", 00:18:14.160 "trsvcid": "4420" 00:18:14.160 }, 00:18:14.160 "peer_address": { 00:18:14.160 "trtype": "TCP", 00:18:14.160 "adrfam": "IPv4", 00:18:14.160 "traddr": "10.0.0.1", 00:18:14.160 "trsvcid": "33078" 00:18:14.160 }, 00:18:14.160 "auth": { 00:18:14.160 "state": "completed", 00:18:14.160 "digest": "sha384", 00:18:14.160 "dhgroup": "ffdhe4096" 00:18:14.160 } 00:18:14.160 } 00:18:14.160 ]' 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.160 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:14.419 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.419 15:56:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.419 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.419 15:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:14.992 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:15.250 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:15.509 00:18:15.509 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:15.509 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:15.509 15:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:15.769 { 00:18:15.769 "cntlid": 81, 00:18:15.769 "qid": 0, 00:18:15.769 "state": "enabled", 00:18:15.769 "listen_address": { 00:18:15.769 "trtype": "TCP", 00:18:15.769 "adrfam": "IPv4", 00:18:15.769 "traddr": "10.0.0.2", 00:18:15.769 "trsvcid": "4420" 00:18:15.769 }, 00:18:15.769 "peer_address": { 00:18:15.769 "trtype": "TCP", 00:18:15.769 "adrfam": "IPv4", 00:18:15.769 "traddr": "10.0.0.1", 00:18:15.769 "trsvcid": "33100" 00:18:15.769 }, 00:18:15.769 "auth": { 00:18:15.769 "state": "completed", 00:18:15.769 "digest": "sha384", 00:18:15.769 "dhgroup": "ffdhe6144" 00:18:15.769 } 00:18:15.769 } 00:18:15.769 ]' 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.769 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.028 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:16.597 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.597 15:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:16.597 15:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.597 15:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:16.597 15:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.597 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:16.597 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:16.597 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:16.856 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:17.114 00:18:17.114 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:17.114 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:17.114 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:17.376 { 00:18:17.376 "cntlid": 83, 00:18:17.376 "qid": 0, 00:18:17.376 "state": "enabled", 00:18:17.376 "listen_address": { 00:18:17.376 "trtype": "TCP", 00:18:17.376 "adrfam": "IPv4", 00:18:17.376 "traddr": "10.0.0.2", 00:18:17.376 "trsvcid": "4420" 00:18:17.376 }, 00:18:17.376 "peer_address": { 00:18:17.376 
"trtype": "TCP", 00:18:17.376 "adrfam": "IPv4", 00:18:17.376 "traddr": "10.0.0.1", 00:18:17.376 "trsvcid": "33136" 00:18:17.376 }, 00:18:17.376 "auth": { 00:18:17.376 "state": "completed", 00:18:17.376 "digest": "sha384", 00:18:17.376 "dhgroup": "ffdhe6144" 00:18:17.376 } 00:18:17.376 } 00:18:17.376 ]' 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.376 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.634 15:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.201 15:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.461 00:18:18.719 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:18.719 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:18.719 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.719 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.719 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.719 15:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.720 15:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.720 15:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.720 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:18.720 { 00:18:18.720 "cntlid": 85, 00:18:18.720 "qid": 0, 00:18:18.720 "state": "enabled", 00:18:18.720 "listen_address": { 00:18:18.720 "trtype": "TCP", 00:18:18.720 "adrfam": "IPv4", 00:18:18.720 "traddr": "10.0.0.2", 00:18:18.720 "trsvcid": "4420" 00:18:18.720 }, 00:18:18.720 "peer_address": { 00:18:18.720 "trtype": "TCP", 00:18:18.720 "adrfam": "IPv4", 00:18:18.720 "traddr": "10.0.0.1", 00:18:18.720 "trsvcid": "33152" 00:18:18.720 }, 00:18:18.720 "auth": { 00:18:18.720 "state": "completed", 00:18:18.720 "digest": "sha384", 00:18:18.720 "dhgroup": "ffdhe6144" 00:18:18.720 } 00:18:18.720 } 00:18:18.720 ]' 00:18:18.720 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:18.979 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.979 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:18.979 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.979 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:18.979 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.979 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.979 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.979 15:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:19.546 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.546 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:19.546 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.546 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.546 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.546 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:19.546 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:19.546 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.805 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.064 00:18:20.064 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:20.064 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.064 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.323 15:56:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:20.323 { 00:18:20.323 "cntlid": 87, 00:18:20.323 "qid": 0, 00:18:20.323 "state": "enabled", 00:18:20.323 "listen_address": { 00:18:20.323 "trtype": "TCP", 00:18:20.323 "adrfam": "IPv4", 00:18:20.323 "traddr": "10.0.0.2", 00:18:20.323 "trsvcid": "4420" 00:18:20.323 }, 00:18:20.323 "peer_address": { 00:18:20.323 "trtype": "TCP", 00:18:20.323 "adrfam": "IPv4", 00:18:20.323 "traddr": "10.0.0.1", 00:18:20.323 "trsvcid": "60352" 00:18:20.323 }, 00:18:20.323 "auth": { 00:18:20.323 "state": "completed", 00:18:20.323 "digest": "sha384", 00:18:20.323 "dhgroup": "ffdhe6144" 00:18:20.323 } 00:18:20.323 } 00:18:20.323 ]' 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.323 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:20.582 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.582 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.582 15:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.583 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:21.151 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:21.412 15:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:21.981 00:18:21.981 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:21.982 { 00:18:21.982 "cntlid": 89, 00:18:21.982 "qid": 0, 00:18:21.982 "state": "enabled", 00:18:21.982 "listen_address": { 00:18:21.982 "trtype": "TCP", 00:18:21.982 "adrfam": "IPv4", 00:18:21.982 "traddr": "10.0.0.2", 00:18:21.982 "trsvcid": "4420" 00:18:21.982 }, 00:18:21.982 "peer_address": { 00:18:21.982 "trtype": "TCP", 00:18:21.982 "adrfam": "IPv4", 00:18:21.982 "traddr": "10.0.0.1", 00:18:21.982 "trsvcid": "60384" 00:18:21.982 }, 00:18:21.982 "auth": { 00:18:21.982 "state": "completed", 00:18:21.982 "digest": "sha384", 00:18:21.982 "dhgroup": "ffdhe8192" 00:18:21.982 } 00:18:21.982 } 00:18:21.982 ]' 00:18:21.982 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:22.313 15:56:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.313 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:22.313 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.313 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:22.313 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.313 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.313 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.313 15:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:22.881 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.881 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:22.881 15:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.881 15:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.881 15:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.881 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:22.881 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.881 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:23.140 15:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:23.709 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:23.709 { 00:18:23.709 "cntlid": 91, 00:18:23.709 "qid": 0, 00:18:23.709 "state": "enabled", 00:18:23.709 "listen_address": { 00:18:23.709 "trtype": "TCP", 00:18:23.709 "adrfam": "IPv4", 00:18:23.709 "traddr": "10.0.0.2", 00:18:23.709 "trsvcid": "4420" 00:18:23.709 }, 00:18:23.709 "peer_address": { 00:18:23.709 "trtype": "TCP", 00:18:23.709 "adrfam": "IPv4", 00:18:23.709 "traddr": "10.0.0.1", 00:18:23.709 "trsvcid": "60412" 00:18:23.709 }, 00:18:23.709 "auth": { 00:18:23.709 "state": "completed", 00:18:23.709 "digest": "sha384", 00:18:23.709 "dhgroup": "ffdhe8192" 00:18:23.709 } 00:18:23.709 } 00:18:23.709 ]' 00:18:23.709 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:23.968 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.968 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:23.968 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.968 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:23.968 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.968 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.968 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.227 15:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:24.796 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:25.364 00:18:25.364 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:25.364 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:25.364 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
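After each bdev-layer pass, the script repeats the handshake through the kernel initiator: nvme connect is given the same subsystem, host NQN and host ID, plus the matching qualified secret, and success is confirmed by nvme disconnect reporting one controller torn down before the host entry is removed from the subsystem. A sketch of that leg, again using only flags present in this log; the DHHC-1 string is this run's key2 secret copied verbatim (in this run the four generated keys are distinguishable by the 00 through 03 field after DHHC-1), and a real deployment would generate its own:

#!/usr/bin/env bash
# Kernel-initiator leg of the same test (all values copied from this run).
set -e
rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=006f0d1b-21c0-e711-906e-00163566263e
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
secret='DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==:'

# Authenticate through the in-kernel NVMe/TCP initiator.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
     --hostid "$hostid" --dhchap-secret "$secret"

# "disconnected 1 controller(s)" on stdout is the pass criterion used above.
nvme disconnect -n "$subnqn"

# Deregister the host on the target (default RPC socket, assumed).
"$rpcpy" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"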
00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:25.623 { 00:18:25.623 "cntlid": 93, 00:18:25.623 "qid": 0, 00:18:25.623 "state": "enabled", 00:18:25.623 "listen_address": { 00:18:25.623 "trtype": "TCP", 00:18:25.623 "adrfam": "IPv4", 00:18:25.623 "traddr": "10.0.0.2", 00:18:25.623 "trsvcid": "4420" 00:18:25.623 }, 00:18:25.623 "peer_address": { 00:18:25.623 "trtype": "TCP", 00:18:25.623 "adrfam": "IPv4", 00:18:25.623 "traddr": "10.0.0.1", 00:18:25.623 "trsvcid": "60428" 00:18:25.623 }, 00:18:25.623 "auth": { 00:18:25.623 "state": "completed", 00:18:25.623 "digest": "sha384", 00:18:25.623 "dhgroup": "ffdhe8192" 00:18:25.623 } 00:18:25.623 } 00:18:25.623 ]' 00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.623 15:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:25.623 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.623 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:25.623 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.623 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.623 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.882 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.450 15:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.450 15:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.451 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.451 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.019 00:18:27.019 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:27.019 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:27.019 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:27.278 { 00:18:27.278 "cntlid": 95, 00:18:27.278 "qid": 0, 00:18:27.278 "state": "enabled", 00:18:27.278 "listen_address": { 00:18:27.278 "trtype": "TCP", 00:18:27.278 "adrfam": "IPv4", 00:18:27.278 "traddr": "10.0.0.2", 00:18:27.278 "trsvcid": "4420" 00:18:27.278 }, 00:18:27.278 "peer_address": { 00:18:27.278 "trtype": "TCP", 00:18:27.278 "adrfam": "IPv4", 00:18:27.278 "traddr": "10.0.0.1", 00:18:27.278 "trsvcid": "60462" 00:18:27.278 }, 00:18:27.278 "auth": { 00:18:27.278 "state": "completed", 00:18:27.278 "digest": "sha384", 00:18:27.278 "dhgroup": "ffdhe8192" 00:18:27.278 } 00:18:27.278 } 00:18:27.278 ]' 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.278 15:56:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.278 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.538 15:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:28.107 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:28.367 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:28.367 00:18:28.625 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:28.625 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:28.625 15:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.625 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.625 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.625 15:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.625 15:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.625 15:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.625 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:28.625 { 00:18:28.625 "cntlid": 97, 00:18:28.626 "qid": 0, 00:18:28.626 "state": "enabled", 00:18:28.626 "listen_address": { 00:18:28.626 "trtype": "TCP", 00:18:28.626 "adrfam": "IPv4", 00:18:28.626 "traddr": "10.0.0.2", 00:18:28.626 "trsvcid": "4420" 00:18:28.626 }, 00:18:28.626 "peer_address": { 00:18:28.626 "trtype": "TCP", 00:18:28.626 "adrfam": "IPv4", 00:18:28.626 "traddr": "10.0.0.1", 00:18:28.626 "trsvcid": "60496" 00:18:28.626 }, 00:18:28.626 "auth": { 00:18:28.626 "state": "completed", 00:18:28.626 "digest": "sha512", 00:18:28.626 "dhgroup": "null" 00:18:28.626 } 00:18:28.626 } 00:18:28.626 ]' 00:18:28.626 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:28.626 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.626 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:28.885 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:28.885 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:28.885 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.885 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.885 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.885 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:29.453 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.453 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:29.453 15:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.454 15:56:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.454 15:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.454 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:29.454 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.454 15:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:29.713 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:29.973 00:18:29.973 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:29.973 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:29.973 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:30.232 { 00:18:30.232 "cntlid": 99, 00:18:30.232 "qid": 0, 00:18:30.232 "state": "enabled", 00:18:30.232 "listen_address": { 00:18:30.232 "trtype": "TCP", 00:18:30.232 "adrfam": "IPv4", 00:18:30.232 "traddr": "10.0.0.2", 00:18:30.232 "trsvcid": "4420" 00:18:30.232 }, 
00:18:30.232 "peer_address": { 00:18:30.232 "trtype": "TCP", 00:18:30.232 "adrfam": "IPv4", 00:18:30.232 "traddr": "10.0.0.1", 00:18:30.232 "trsvcid": "37080" 00:18:30.232 }, 00:18:30.232 "auth": { 00:18:30.232 "state": "completed", 00:18:30.232 "digest": "sha512", 00:18:30.232 "dhgroup": "null" 00:18:30.232 } 00:18:30.232 } 00:18:30.232 ]' 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.232 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.491 15:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:31.061 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:31.320 00:18:31.320 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:31.320 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:31.320 15:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:31.579 { 00:18:31.579 "cntlid": 101, 00:18:31.579 "qid": 0, 00:18:31.579 "state": "enabled", 00:18:31.579 "listen_address": { 00:18:31.579 "trtype": "TCP", 00:18:31.579 "adrfam": "IPv4", 00:18:31.579 "traddr": "10.0.0.2", 00:18:31.579 "trsvcid": "4420" 00:18:31.579 }, 00:18:31.579 "peer_address": { 00:18:31.579 "trtype": "TCP", 00:18:31.579 "adrfam": "IPv4", 00:18:31.579 "traddr": "10.0.0.1", 00:18:31.579 "trsvcid": "37106" 00:18:31.579 }, 00:18:31.579 "auth": { 00:18:31.579 "state": "completed", 00:18:31.579 "digest": "sha512", 00:18:31.579 "dhgroup": "null" 00:18:31.579 } 00:18:31.579 } 00:18:31.579 ]' 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:31.579 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:31.838 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.838 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.838 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.838 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:32.407 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.407 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:32.407 15:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.407 15:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.407 15:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.407 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:32.407 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:32.407 15:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.667 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.926 00:18:32.926 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:32.926 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:32.926 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.185 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.185 15:56:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.185 15:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:33.186 { 00:18:33.186 "cntlid": 103, 00:18:33.186 "qid": 0, 00:18:33.186 "state": "enabled", 00:18:33.186 "listen_address": { 00:18:33.186 "trtype": "TCP", 00:18:33.186 "adrfam": "IPv4", 00:18:33.186 "traddr": "10.0.0.2", 00:18:33.186 "trsvcid": "4420" 00:18:33.186 }, 00:18:33.186 "peer_address": { 00:18:33.186 "trtype": "TCP", 00:18:33.186 "adrfam": "IPv4", 00:18:33.186 "traddr": "10.0.0.1", 00:18:33.186 "trsvcid": "37134" 00:18:33.186 }, 00:18:33.186 "auth": { 00:18:33.186 "state": "completed", 00:18:33.186 "digest": "sha512", 00:18:33.186 "dhgroup": "null" 00:18:33.186 } 00:18:33.186 } 00:18:33.186 ]' 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.186 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.445 15:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:34.014 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:34.274 00:18:34.274 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:34.274 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:34.274 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.533 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.533 15:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.533 15:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.533 15:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.533 15:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.533 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:34.533 { 00:18:34.533 "cntlid": 105, 00:18:34.533 "qid": 0, 00:18:34.533 "state": "enabled", 00:18:34.533 "listen_address": { 00:18:34.533 "trtype": "TCP", 00:18:34.533 "adrfam": "IPv4", 00:18:34.533 "traddr": "10.0.0.2", 00:18:34.533 "trsvcid": "4420" 00:18:34.533 }, 00:18:34.533 "peer_address": { 00:18:34.533 "trtype": "TCP", 00:18:34.533 "adrfam": "IPv4", 00:18:34.533 "traddr": "10.0.0.1", 00:18:34.533 "trsvcid": "37164" 00:18:34.533 }, 00:18:34.533 "auth": { 00:18:34.533 "state": "completed", 00:18:34.533 "digest": "sha512", 00:18:34.533 "dhgroup": "ffdhe2048" 00:18:34.533 } 00:18:34.533 } 00:18:34.533 ]' 00:18:34.533 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:34.533 15:56:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.533 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:34.533 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:34.792 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:34.792 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.792 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.792 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.792 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:35.411 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.411 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:35.411 15:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.411 15:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.411 15:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.411 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:35.411 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.411 15:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:35.670 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:35.929 00:18:35.929 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:35.929 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:35.929 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.929 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.929 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.929 15:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.929 15:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:36.189 { 00:18:36.189 "cntlid": 107, 00:18:36.189 "qid": 0, 00:18:36.189 "state": "enabled", 00:18:36.189 "listen_address": { 00:18:36.189 "trtype": "TCP", 00:18:36.189 "adrfam": "IPv4", 00:18:36.189 "traddr": "10.0.0.2", 00:18:36.189 "trsvcid": "4420" 00:18:36.189 }, 00:18:36.189 "peer_address": { 00:18:36.189 "trtype": "TCP", 00:18:36.189 "adrfam": "IPv4", 00:18:36.189 "traddr": "10.0.0.1", 00:18:36.189 "trsvcid": "37186" 00:18:36.189 }, 00:18:36.189 "auth": { 00:18:36.189 "state": "completed", 00:18:36.189 "digest": "sha512", 00:18:36.189 "dhgroup": "ffdhe2048" 00:18:36.189 } 00:18:36.189 } 00:18:36.189 ]' 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.189 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.453 15:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:37.022 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:37.022 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:37.022 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.022 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:37.023 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:37.282 00:18:37.282 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:37.282 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:37.282 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.540 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.540 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.540 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.540 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.540 15:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
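The same connect/authenticate/verify cycle repeats above for every digest, DH group and key index; a minimal sketch of one iteration, assuming the RPC script path, host RPC socket, subsystem NQN, host NQN and key names used in this run (they are placeholders anywhere else), looks like:

  # One connect_authenticate iteration as exercised in this log (sketch, not the test script itself).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict the initiator to the digest/dhgroup pair under test.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side: allow the host on the subsystem with the DH-HMAC-CHAP key under test.
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0

  # Authenticate by attaching a controller with the matching key, then check what the qpair negotiated.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expected: completed
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'   # expected: sha512
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'  # expected: ffdhe2048

  # Tear down, re-check the same key through nvme-cli with the raw DHHC-1 secret, then clean up.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
      --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret "DHHC-1:00:..."
  nvme disconnect -n $SUBNQN
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN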
00:18:37.540 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:37.540 { 00:18:37.540 "cntlid": 109, 00:18:37.540 "qid": 0, 00:18:37.540 "state": "enabled", 00:18:37.540 "listen_address": { 00:18:37.540 "trtype": "TCP", 00:18:37.540 "adrfam": "IPv4", 00:18:37.540 "traddr": "10.0.0.2", 00:18:37.540 "trsvcid": "4420" 00:18:37.540 }, 00:18:37.540 "peer_address": { 00:18:37.540 "trtype": "TCP", 00:18:37.540 "adrfam": "IPv4", 00:18:37.540 "traddr": "10.0.0.1", 00:18:37.540 "trsvcid": "37210" 00:18:37.540 }, 00:18:37.540 "auth": { 00:18:37.540 "state": "completed", 00:18:37.540 "digest": "sha512", 00:18:37.540 "dhgroup": "ffdhe2048" 00:18:37.540 } 00:18:37.540 } 00:18:37.540 ]' 00:18:37.540 15:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:37.540 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.540 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:37.540 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.540 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:37.541 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.541 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.541 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.800 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:38.368 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.368 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:38.368 15:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.368 15:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.368 15:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.368 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:38.368 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:38.368 15:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.626 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.884 00:18:38.884 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:38.884 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.884 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:38.884 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.884 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.884 15:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.884 15:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:39.142 { 00:18:39.142 "cntlid": 111, 00:18:39.142 "qid": 0, 00:18:39.142 "state": "enabled", 00:18:39.142 "listen_address": { 00:18:39.142 "trtype": "TCP", 00:18:39.142 "adrfam": "IPv4", 00:18:39.142 "traddr": "10.0.0.2", 00:18:39.142 "trsvcid": "4420" 00:18:39.142 }, 00:18:39.142 "peer_address": { 00:18:39.142 "trtype": "TCP", 00:18:39.142 "adrfam": "IPv4", 00:18:39.142 "traddr": "10.0.0.1", 00:18:39.142 "trsvcid": "37246" 00:18:39.142 }, 00:18:39.142 "auth": { 00:18:39.142 "state": "completed", 00:18:39.142 "digest": "sha512", 00:18:39.142 "dhgroup": "ffdhe2048" 00:18:39.142 } 00:18:39.142 } 00:18:39.142 ]' 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.142 15:56:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.142 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.400 15:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:39.968 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:40.228 00:18:40.228 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:40.228 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:40.228 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:40.488 { 00:18:40.488 "cntlid": 113, 00:18:40.488 "qid": 0, 00:18:40.488 "state": "enabled", 00:18:40.488 "listen_address": { 00:18:40.488 "trtype": "TCP", 00:18:40.488 "adrfam": "IPv4", 00:18:40.488 "traddr": "10.0.0.2", 00:18:40.488 "trsvcid": "4420" 00:18:40.488 }, 00:18:40.488 "peer_address": { 00:18:40.488 "trtype": "TCP", 00:18:40.488 "adrfam": "IPv4", 00:18:40.488 "traddr": "10.0.0.1", 00:18:40.488 "trsvcid": "47242" 00:18:40.488 }, 00:18:40.488 "auth": { 00:18:40.488 "state": "completed", 00:18:40.488 "digest": "sha512", 00:18:40.488 "dhgroup": "ffdhe3072" 00:18:40.488 } 00:18:40.488 } 00:18:40.488 ]' 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.488 15:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:40.488 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.488 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:40.488 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.488 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.488 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.748 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:41.318 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.318 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:41.318 15:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.318 15:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:41.318 15:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.318 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:41.318 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.318 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:41.577 15:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:41.837 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:41.837 { 00:18:41.837 "cntlid": 115, 00:18:41.837 "qid": 0, 00:18:41.837 "state": "enabled", 00:18:41.837 "listen_address": { 00:18:41.837 "trtype": "TCP", 00:18:41.837 "adrfam": "IPv4", 00:18:41.837 "traddr": "10.0.0.2", 00:18:41.837 "trsvcid": "4420" 00:18:41.837 }, 00:18:41.837 "peer_address": { 00:18:41.837 
"trtype": "TCP", 00:18:41.837 "adrfam": "IPv4", 00:18:41.837 "traddr": "10.0.0.1", 00:18:41.837 "trsvcid": "47280" 00:18:41.837 }, 00:18:41.837 "auth": { 00:18:41.837 "state": "completed", 00:18:41.837 "digest": "sha512", 00:18:41.837 "dhgroup": "ffdhe3072" 00:18:41.837 } 00:18:41.837 } 00:18:41.837 ]' 00:18:41.837 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:42.096 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.096 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:42.096 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.096 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:42.096 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.096 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.096 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.355 15:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:42.923 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:43.181 00:18:43.181 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:43.181 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:43.181 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:43.441 { 00:18:43.441 "cntlid": 117, 00:18:43.441 "qid": 0, 00:18:43.441 "state": "enabled", 00:18:43.441 "listen_address": { 00:18:43.441 "trtype": "TCP", 00:18:43.441 "adrfam": "IPv4", 00:18:43.441 "traddr": "10.0.0.2", 00:18:43.441 "trsvcid": "4420" 00:18:43.441 }, 00:18:43.441 "peer_address": { 00:18:43.441 "trtype": "TCP", 00:18:43.441 "adrfam": "IPv4", 00:18:43.441 "traddr": "10.0.0.1", 00:18:43.441 "trsvcid": "47322" 00:18:43.441 }, 00:18:43.441 "auth": { 00:18:43.441 "state": "completed", 00:18:43.441 "digest": "sha512", 00:18:43.441 "dhgroup": "ffdhe3072" 00:18:43.441 } 00:18:43.441 } 00:18:43.441 ]' 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.441 15:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:43.701 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.701 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.701 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.701 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:44.269 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.269 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:44.269 15:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.269 15:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.269 15:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.269 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:44.269 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.269 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.528 15:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.788 00:18:44.788 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:44.788 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:44.788 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.788 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.788 15:56:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.788 15:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.788 15:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.054 15:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:45.055 { 00:18:45.055 "cntlid": 119, 00:18:45.055 "qid": 0, 00:18:45.055 "state": "enabled", 00:18:45.055 "listen_address": { 00:18:45.055 "trtype": "TCP", 00:18:45.055 "adrfam": "IPv4", 00:18:45.055 "traddr": "10.0.0.2", 00:18:45.055 "trsvcid": "4420" 00:18:45.055 }, 00:18:45.055 "peer_address": { 00:18:45.055 "trtype": "TCP", 00:18:45.055 "adrfam": "IPv4", 00:18:45.055 "traddr": "10.0.0.1", 00:18:45.055 "trsvcid": "47354" 00:18:45.055 }, 00:18:45.055 "auth": { 00:18:45.055 "state": "completed", 00:18:45.055 "digest": "sha512", 00:18:45.055 "dhgroup": "ffdhe3072" 00:18:45.055 } 00:18:45.055 } 00:18:45.055 ]' 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.055 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.317 15:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:45.886 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:46.144 00:18:46.144 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:46.144 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:46.144 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:46.404 { 00:18:46.404 "cntlid": 121, 00:18:46.404 "qid": 0, 00:18:46.404 "state": "enabled", 00:18:46.404 "listen_address": { 00:18:46.404 "trtype": "TCP", 00:18:46.404 "adrfam": "IPv4", 00:18:46.404 "traddr": "10.0.0.2", 00:18:46.404 "trsvcid": "4420" 00:18:46.404 }, 00:18:46.404 "peer_address": { 00:18:46.404 "trtype": "TCP", 00:18:46.404 "adrfam": "IPv4", 00:18:46.404 "traddr": "10.0.0.1", 00:18:46.404 "trsvcid": "47380" 00:18:46.404 }, 00:18:46.404 "auth": { 00:18:46.404 "state": "completed", 00:18:46.404 "digest": "sha512", 00:18:46.404 "dhgroup": "ffdhe4096" 00:18:46.404 } 00:18:46.404 } 00:18:46.404 ]' 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:46.404 15:56:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.404 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:46.663 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.663 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.663 15:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.663 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:47.231 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.231 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:47.231 15:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.231 15:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.231 15:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.231 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:47.231 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.231 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:47.489 15:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:47.747 00:18:47.747 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:47.747 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.747 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:48.006 { 00:18:48.006 "cntlid": 123, 00:18:48.006 "qid": 0, 00:18:48.006 "state": "enabled", 00:18:48.006 "listen_address": { 00:18:48.006 "trtype": "TCP", 00:18:48.006 "adrfam": "IPv4", 00:18:48.006 "traddr": "10.0.0.2", 00:18:48.006 "trsvcid": "4420" 00:18:48.006 }, 00:18:48.006 "peer_address": { 00:18:48.006 "trtype": "TCP", 00:18:48.006 "adrfam": "IPv4", 00:18:48.006 "traddr": "10.0.0.1", 00:18:48.006 "trsvcid": "47412" 00:18:48.006 }, 00:18:48.006 "auth": { 00:18:48.006 "state": "completed", 00:18:48.006 "digest": "sha512", 00:18:48.006 "dhgroup": "ffdhe4096" 00:18:48.006 } 00:18:48.006 } 00:18:48.006 ]' 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.006 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.297 15:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s)
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:18:48.866 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:18:49.125
00:18:49.125 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:18:49.125 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:18:49.125 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
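The trace at this point is mid-way through one pass of the test's connect/authenticate loop: the host bdev layer is restricted to a single DH-HMAC-CHAP digest/dhgroup pair, the key under test is granted to the host NQN on the subsystem, a controller is attached (authentication runs during controller initialization), the target's qpair is queried to confirm the negotiated parameters, and the same secret is then exercised through the kernel initiator. A condensed sketch of one pass, using the rpc.py path, sockets, NQNs, and target address captured in this run; the target-side rpc_cmd socket and the DHHC-1 secret are whatever the test registered earlier, so the secret is left as a placeholder here:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

  # Host side: only advertise the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side: authorize the host NQN with the key under test.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2

  # Attach a controller; DH-HMAC-CHAP runs as part of controller setup.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key2

  # Verify the controller exists and the qpair reports the negotiated auth parameters, then detach.
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'                # digest, dhgroup, state
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Repeat the handshake with the kernel initiator using the matching DHHC-1 secret (placeholder).
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret "$DHCHAP_SECRET"
  nvme disconnect -n $subnqn
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn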
00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:49.385 { 00:18:49.385 "cntlid": 125, 00:18:49.385 "qid": 0, 00:18:49.385 "state": "enabled", 00:18:49.385 "listen_address": { 00:18:49.385 "trtype": "TCP", 00:18:49.385 "adrfam": "IPv4", 00:18:49.385 "traddr": "10.0.0.2", 00:18:49.385 "trsvcid": "4420" 00:18:49.385 }, 00:18:49.385 "peer_address": { 00:18:49.385 "trtype": "TCP", 00:18:49.385 "adrfam": "IPv4", 00:18:49.385 "traddr": "10.0.0.1", 00:18:49.385 "trsvcid": "47438" 00:18:49.385 }, 00:18:49.385 "auth": { 00:18:49.385 "state": "completed", 00:18:49.385 "digest": "sha512", 00:18:49.385 "dhgroup": "ffdhe4096" 00:18:49.385 } 00:18:49.385 } 00:18:49.385 ]' 00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.385 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.644 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.644 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.644 15:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.644 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:50.211 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.211 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:50.211 15:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.211 15:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.211 15:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.211 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:50.211 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.211 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.470 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.471 15:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.729 00:18:50.729 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:50.729 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.729 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:50.988 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.988 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.988 15:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.988 15:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 15:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.988 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:50.988 { 00:18:50.988 "cntlid": 127, 00:18:50.988 "qid": 0, 00:18:50.988 "state": "enabled", 00:18:50.988 "listen_address": { 00:18:50.988 "trtype": "TCP", 00:18:50.988 "adrfam": "IPv4", 00:18:50.988 "traddr": "10.0.0.2", 00:18:50.988 "trsvcid": "4420" 00:18:50.988 }, 00:18:50.988 "peer_address": { 00:18:50.988 "trtype": "TCP", 00:18:50.988 "adrfam": "IPv4", 00:18:50.988 "traddr": "10.0.0.1", 00:18:50.988 "trsvcid": "37788" 00:18:50.988 }, 00:18:50.989 "auth": { 00:18:50.989 "state": "completed", 00:18:50.989 "digest": "sha512", 00:18:50.989 "dhgroup": "ffdhe4096" 00:18:50.989 } 00:18:50.989 } 00:18:50.989 ]' 00:18:50.989 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:50.989 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.989 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:50.989 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.989 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:50.989 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.989 15:56:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.989 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.247 15:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.813 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.072 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:18:52.072 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:52.072 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.072 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.072 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.073 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:52.073 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.073 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.073 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.073 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:52.073 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:52.331 00:18:52.331 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:52.331 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:52.331 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.590 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.590 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.590 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.590 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.590 15:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.590 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:52.590 { 00:18:52.590 "cntlid": 129, 00:18:52.590 "qid": 0, 00:18:52.590 "state": "enabled", 00:18:52.590 "listen_address": { 00:18:52.590 "trtype": "TCP", 00:18:52.590 "adrfam": "IPv4", 00:18:52.590 "traddr": "10.0.0.2", 00:18:52.590 "trsvcid": "4420" 00:18:52.590 }, 00:18:52.590 "peer_address": { 00:18:52.590 "trtype": "TCP", 00:18:52.590 "adrfam": "IPv4", 00:18:52.590 "traddr": "10.0.0.1", 00:18:52.590 "trsvcid": "37814" 00:18:52.590 }, 00:18:52.590 "auth": { 00:18:52.590 "state": "completed", 00:18:52.590 "digest": "sha512", 00:18:52.590 "dhgroup": "ffdhe6144" 00:18:52.590 } 00:18:52.590 } 00:18:52.590 ]' 00:18:52.590 15:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:52.590 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.590 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:52.590 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.590 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:52.590 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.590 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.590 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.849 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:53.417 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.417 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:53.417 15:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.417 15:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:53.417 15:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.417 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:53.417 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.417 15:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:53.677 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:53.936 00:18:53.936 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.936 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.936 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:54.195 { 00:18:54.195 "cntlid": 131, 00:18:54.195 "qid": 0, 00:18:54.195 "state": "enabled", 00:18:54.195 "listen_address": { 00:18:54.195 "trtype": "TCP", 00:18:54.195 "adrfam": "IPv4", 00:18:54.195 "traddr": "10.0.0.2", 00:18:54.195 "trsvcid": "4420" 00:18:54.195 }, 00:18:54.195 "peer_address": { 00:18:54.195 
"trtype": "TCP", 00:18:54.195 "adrfam": "IPv4", 00:18:54.195 "traddr": "10.0.0.1", 00:18:54.195 "trsvcid": "37850" 00:18:54.195 }, 00:18:54.195 "auth": { 00:18:54.195 "state": "completed", 00:18:54.195 "digest": "sha512", 00:18:54.195 "dhgroup": "ffdhe6144" 00:18:54.195 } 00:18:54.195 } 00:18:54.195 ]' 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.195 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.454 15:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:55.020 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:55.587 00:18:55.587 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.587 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.587 15:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:55.587 { 00:18:55.587 "cntlid": 133, 00:18:55.587 "qid": 0, 00:18:55.587 "state": "enabled", 00:18:55.587 "listen_address": { 00:18:55.587 "trtype": "TCP", 00:18:55.587 "adrfam": "IPv4", 00:18:55.587 "traddr": "10.0.0.2", 00:18:55.587 "trsvcid": "4420" 00:18:55.587 }, 00:18:55.587 "peer_address": { 00:18:55.587 "trtype": "TCP", 00:18:55.587 "adrfam": "IPv4", 00:18:55.587 "traddr": "10.0.0.1", 00:18:55.587 "trsvcid": "37866" 00:18:55.587 }, 00:18:55.587 "auth": { 00:18:55.587 "state": "completed", 00:18:55.587 "digest": "sha512", 00:18:55.587 "dhgroup": "ffdhe6144" 00:18:55.587 } 00:18:55.587 } 00:18:55.587 ]' 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.587 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:55.847 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.847 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:55.847 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.847 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.847 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.847 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:18:56.415 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.415 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:56.415 15:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.415 15:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.415 15:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.415 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:56.415 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.415 15:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.674 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.933 00:18:56.933 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:56.933 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:56.933 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.193 15:56:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:57.193 { 00:18:57.193 "cntlid": 135, 00:18:57.193 "qid": 0, 00:18:57.193 "state": "enabled", 00:18:57.193 "listen_address": { 00:18:57.193 "trtype": "TCP", 00:18:57.193 "adrfam": "IPv4", 00:18:57.193 "traddr": "10.0.0.2", 00:18:57.193 "trsvcid": "4420" 00:18:57.193 }, 00:18:57.193 "peer_address": { 00:18:57.193 "trtype": "TCP", 00:18:57.193 "adrfam": "IPv4", 00:18:57.193 "traddr": "10.0.0.1", 00:18:57.193 "trsvcid": "37894" 00:18:57.193 }, 00:18:57.193 "auth": { 00:18:57.193 "state": "completed", 00:18:57.193 "digest": "sha512", 00:18:57.193 "dhgroup": "ffdhe6144" 00:18:57.193 } 00:18:57.193 } 00:18:57.193 ]' 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.193 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:57.452 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.452 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.452 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.452 15:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.021 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:58.281 15:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:58.847 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.847 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:58.847 { 00:18:58.848 "cntlid": 137, 00:18:58.848 "qid": 0, 00:18:58.848 "state": "enabled", 00:18:58.848 "listen_address": { 00:18:58.848 "trtype": "TCP", 00:18:58.848 "adrfam": "IPv4", 00:18:58.848 "traddr": "10.0.0.2", 00:18:58.848 "trsvcid": "4420" 00:18:58.848 }, 00:18:58.848 "peer_address": { 00:18:58.848 "trtype": "TCP", 00:18:58.848 "adrfam": "IPv4", 00:18:58.848 "traddr": "10.0.0.1", 00:18:58.848 "trsvcid": "37922" 00:18:58.848 }, 00:18:58.848 "auth": { 00:18:58.848 "state": "completed", 00:18:58.848 "digest": "sha512", 00:18:58.848 "dhgroup": "ffdhe8192" 00:18:58.848 } 00:18:58.848 } 00:18:58.848 ]' 00:18:58.848 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:58.848 15:56:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.848 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:59.106 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.106 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:59.106 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.106 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.106 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.365 15:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:59.935 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:00.504 00:19:00.504 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:00.504 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:00.504 15:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.504 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:00.764 { 00:19:00.764 "cntlid": 139, 00:19:00.764 "qid": 0, 00:19:00.764 "state": "enabled", 00:19:00.764 "listen_address": { 00:19:00.764 "trtype": "TCP", 00:19:00.764 "adrfam": "IPv4", 00:19:00.764 "traddr": "10.0.0.2", 00:19:00.764 "trsvcid": "4420" 00:19:00.764 }, 00:19:00.764 "peer_address": { 00:19:00.764 "trtype": "TCP", 00:19:00.764 "adrfam": "IPv4", 00:19:00.764 "traddr": "10.0.0.1", 00:19:00.764 "trsvcid": "60562" 00:19:00.764 }, 00:19:00.764 "auth": { 00:19:00.764 "state": "completed", 00:19:00.764 "digest": "sha512", 00:19:00.764 "dhgroup": "ffdhe8192" 00:19:00.764 } 00:19:00.764 } 00:19:00.764 ]' 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.764 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.023 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGI4MWNhNmZhMDUyYWYwM2JhN2NjNTA5NTI3ODA0MDbR1ddT: 00:19:01.604 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:01.604 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:01.604 15:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.604 15:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.604 15:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.604 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:01.604 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.604 15:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:01.604 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:02.173 00:19:02.173 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:02.173 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:02.173 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:02.433 { 00:19:02.433 "cntlid": 141, 00:19:02.433 "qid": 0, 00:19:02.433 "state": "enabled", 00:19:02.433 "listen_address": { 00:19:02.433 "trtype": "TCP", 00:19:02.433 "adrfam": "IPv4", 00:19:02.433 "traddr": "10.0.0.2", 00:19:02.433 "trsvcid": "4420" 00:19:02.433 }, 00:19:02.433 "peer_address": { 00:19:02.433 "trtype": "TCP", 00:19:02.433 "adrfam": "IPv4", 00:19:02.433 "traddr": "10.0.0.1", 00:19:02.433 "trsvcid": "60584" 00:19:02.433 }, 00:19:02.433 "auth": { 00:19:02.433 "state": "completed", 00:19:02.433 "digest": "sha512", 00:19:02.433 "dhgroup": "ffdhe8192" 00:19:02.433 } 00:19:02.433 } 00:19:02.433 ]' 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.433 15:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.692 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ZDRkNGU2MzFkMzg0MzgzNTQ2YzdmZjg5ZDYxZTA2MjM0NDBmMzdhZTE4OWIxZjE2Vdua6w==: 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.262 15:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.522 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.522 15:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.781 00:19:03.781 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:03.781 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:03.781 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:04.040 { 00:19:04.040 "cntlid": 143, 00:19:04.040 "qid": 0, 00:19:04.040 "state": "enabled", 00:19:04.040 "listen_address": { 00:19:04.040 "trtype": "TCP", 00:19:04.040 "adrfam": "IPv4", 00:19:04.040 "traddr": "10.0.0.2", 00:19:04.040 "trsvcid": "4420" 00:19:04.040 }, 00:19:04.040 "peer_address": { 00:19:04.040 "trtype": "TCP", 00:19:04.040 "adrfam": "IPv4", 00:19:04.040 "traddr": "10.0.0.1", 00:19:04.040 "trsvcid": "60608" 00:19:04.040 }, 00:19:04.040 "auth": { 00:19:04.040 "state": "completed", 00:19:04.040 "digest": "sha512", 00:19:04.040 "dhgroup": "ffdhe8192" 00:19:04.040 } 00:19:04.040 } 00:19:04.040 ]' 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.040 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:04.299 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.300 15:57:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.300 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.300 15:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MWMwNTAxNGM3NjBmNzQwM2IwMzA5NTE5ZmQ0YzgyYmZlMWQ4ZTllYmY1Y2U4ZThjYmYwZmVhZDJlMmE2NmJmYhtWU8Q=: 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:04.869 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.127 15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:05.127 
15:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:05.695 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:05.695 { 00:19:05.695 "cntlid": 145, 00:19:05.695 "qid": 0, 00:19:05.695 "state": "enabled", 00:19:05.695 "listen_address": { 00:19:05.695 "trtype": "TCP", 00:19:05.695 "adrfam": "IPv4", 00:19:05.695 "traddr": "10.0.0.2", 00:19:05.695 "trsvcid": "4420" 00:19:05.695 }, 00:19:05.695 "peer_address": { 00:19:05.695 "trtype": "TCP", 00:19:05.695 "adrfam": "IPv4", 00:19:05.695 "traddr": "10.0.0.1", 00:19:05.695 "trsvcid": "60642" 00:19:05.695 }, 00:19:05.695 "auth": { 00:19:05.695 "state": "completed", 00:19:05.695 "digest": "sha512", 00:19:05.695 "dhgroup": "ffdhe8192" 00:19:05.695 } 00:19:05.695 } 00:19:05.695 ]' 00:19:05.695 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:05.696 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.696 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:05.955 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.955 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:05.955 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.955 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.955 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.215 15:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NzZiOTlkNjYwZGYwNDJiYWI0YTIyMDE1MzJlNDA1OWMyNWI5ZDAxNTM0NDA0MWYy2i6Xrw==: 00:19:06.474 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.474 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:06.474 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:06.474 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:19:06.734 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:19:06.995 request:
00:19:06.995 {
00:19:06.995 "name": "nvme0",
00:19:06.995 "trtype": "tcp",
00:19:06.995 "traddr": "10.0.0.2",
00:19:06.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e",
00:19:06.995 "adrfam": "ipv4",
00:19:06.995 "trsvcid": "4420",
00:19:06.995 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:06.995 "dhchap_key": "key2",
00:19:06.995 "method": "bdev_nvme_attach_controller",
00:19:06.995 "req_id": 1
00:19:06.995 }
00:19:06.995 Got JSON-RPC error response
00:19:06.995 response:
00:19:06.995 {
00:19:06.995 "code": -32602,
00:19:06.995 "message": "Invalid parameters"
00:19:06.995 }
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3753364
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3753364 ']'
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3753364
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3753364
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3753364'
00:19:06.995 killing process with pid 3753364
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3753364
00:19:06.995 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3753364
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:07.564 rmmod nvme_tcp
00:19:07.564 rmmod nvme_fabrics
00:19:07.564 rmmod nvme_keyring
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3753199 ']'
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3753199
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3753199 ']'
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3753199
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3753199
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3753199'
00:19:07.564 killing process with pid 3753199
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3753199
00:19:07.564 15:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3753199
00:19:07.824 15:57:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:07.824 15:57:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:07.824 15:57:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:07.824 15:57:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:07.824 15:57:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:07.824 15:57:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:07.824 15:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:07.824 15:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:09.732 15:57:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:09.732 15:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.T3P /tmp/spdk.key-sha256.jXN /tmp/spdk.key-sha384.gHf /tmp/spdk.key-sha512.LLh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:19:09.732
00:19:09.732 real 2m3.656s
00:19:09.732 user 4m34.062s
00:19:09.732 sys 0m27.717s
00:19:09.732 15:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable
00:19:09.732 15:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.732 ************************************
00:19:09.732 END TEST nvmf_auth_target
00:19:09.732 ************************************
00:19:09.992 15:57:08 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']'
00:19:09.992 15:57:08 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:09.992 15:57:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:19:09.992 15:57:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:19:09.992 15:57:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:09.992 ************************************
00:19:09.992 START TEST nvmf_bdevio_no_huge
00:19:09.992 ************************************
00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:09.992 * Looking for test storage...
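
For orientation: the nvmf_auth_target suite that ends above loops the same connect_authenticate cycle over each digest/dhgroup/key combination. Stripped of the xtrace plumbing, one iteration reduces to the host/target RPC sequence sketched below. This is a condensed sketch distilled from the trace, not a canonical recipe: rpc_cmd in the trace resolves to scripts/rpc.py against the target's default socket, and <hostnqn>, <hostid>, and <DHHC-1 secret> are placeholders for this run's UUID NQN and pre-generated keys.

# Target side: register the host NQN with one of the pre-generated DH-HMAC-CHAP keys
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0

# Host side: pin the initiator to one digest/dhgroup pair, then attach with the same key
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0

# Check what the target actually negotiated, then detach
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# The kernel initiator authenticates with the same key material, passed as a DHHC-1 blob
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> --hostid <hostid> --dhchap-secret <DHHC-1 secret>
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>

The -32602 "Invalid parameters" response captured a few lines up is the suite's deliberate negative case: the target registered key1 for the host, the host offered key2, and the NOT wrapper asserts that bdev_nvme_attach_controller fails.
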
00:19:09.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.992 15:57:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:09.992 15:57:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:16.568 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:16.568 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:16.568 Found net devices under 0000:af:00.0: cvl_0_0 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.568 15:57:14 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.568 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:16.569 Found net devices under 0000:af:00.1: cvl_0_1 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT
00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:16.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:16.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms
00:19:16.569
00:19:16.569 --- 10.0.0.2 ping statistics ---
00:19:16.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:16.569 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:19:16.569 15:57:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:16.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:16.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms
00:19:16.569
00:19:16.569 --- 10.0.0.1 ping statistics ---
00:19:16.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:16.569 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3777953
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3777953
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3777953 ']'
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:16.569 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:16.569 [2024-05-15 15:57:15.082805] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:19:16.569 [2024-05-15 15:57:15.082857] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:16.828 [2024-05-15 15:57:15.164465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.828 [2024-05-15 15:57:15.264240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.828 [2024-05-15 15:57:15.264276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.828 [2024-05-15 15:57:15.264286] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.828 [2024-05-15 15:57:15.264294] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.828 [2024-05-15 15:57:15.264318] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.828 [2024-05-15 15:57:15.264448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:16.828 [2024-05-15 15:57:15.264559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:16.828 [2024-05-15 15:57:15.264668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.828 [2024-05-15 15:57:15.264670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:17.397 [2024-05-15 15:57:15.941087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.397 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:17.657 Malloc0 
00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.657 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:17.657 [2024-05-15 15:57:15.985883] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:17.658 [2024-05-15 15:57:15.986134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.658 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.658 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:17.658 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:17.658 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:17.658 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:17.658 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.658 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.658 { 00:19:17.658 "params": { 00:19:17.658 "name": "Nvme$subsystem", 00:19:17.658 "trtype": "$TEST_TRANSPORT", 00:19:17.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.658 "adrfam": "ipv4", 00:19:17.658 "trsvcid": "$NVMF_PORT", 00:19:17.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.658 "hdgst": ${hdgst:-false}, 00:19:17.658 "ddgst": ${ddgst:-false} 00:19:17.658 }, 00:19:17.658 "method": "bdev_nvme_attach_controller" 00:19:17.658 } 00:19:17.658 EOF 00:19:17.658 )") 00:19:17.658 15:57:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:17.658 15:57:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:19:17.658 15:57:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:17.658 15:57:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:17.658 "params": { 00:19:17.658 "name": "Nvme1", 00:19:17.658 "trtype": "tcp", 00:19:17.658 "traddr": "10.0.0.2", 00:19:17.658 "adrfam": "ipv4", 00:19:17.658 "trsvcid": "4420", 00:19:17.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.658 "hdgst": false, 00:19:17.658 "ddgst": false 00:19:17.658 }, 00:19:17.658 "method": "bdev_nvme_attach_controller" 00:19:17.658 }' 00:19:17.658 [2024-05-15 15:57:16.035821] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:19:17.658 [2024-05-15 15:57:16.035871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3778022 ] 00:19:17.658 [2024-05-15 15:57:16.111069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:17.658 [2024-05-15 15:57:16.212028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.658 [2024-05-15 15:57:16.212126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.658 [2024-05-15 15:57:16.212128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.226 I/O targets: 00:19:18.226 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:18.226 00:19:18.226 00:19:18.226 CUnit - A unit testing framework for C - Version 2.1-3 00:19:18.226 http://cunit.sourceforge.net/ 00:19:18.226 00:19:18.226 00:19:18.226 Suite: bdevio tests on: Nvme1n1 00:19:18.226 Test: blockdev write read block ...passed 00:19:18.226 Test: blockdev write zeroes read block ...passed 00:19:18.226 Test: blockdev write zeroes read no split ...passed 00:19:18.226 Test: blockdev write zeroes read split ...passed 00:19:18.227 Test: blockdev write zeroes read split partial ...passed 00:19:18.227 Test: blockdev reset ...[2024-05-15 15:57:16.647171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:18.227 [2024-05-15 15:57:16.647234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9a910 (9): Bad file descriptor 00:19:18.227 [2024-05-15 15:57:16.664005] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
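The JSON handed to bdevio through --json /dev/fd/62 is produced just above by gen_nvmf_target_json: one heredoc fragment per subsystem, with $subsystem, $TEST_TRANSPORT and $NVMF_FIRST_TARGET_IP expanded by the shell, then joined (IFS=,) and normalised through jq into the printf'd object visible in the trace. A stripped-down sketch of the same pattern for a single controller, without the surrounding wrapper the real helper emits:

# Build one bdev_nvme_attach_controller config fragment via heredoc
# expansion, then validate/pretty-print it with jq.
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
jq . <<< "$config"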
00:19:18.227 passed 00:19:18.227 Test: blockdev write read 8 blocks ...passed 00:19:18.227 Test: blockdev write read size > 128k ...passed 00:19:18.227 Test: blockdev write read invalid size ...passed 00:19:18.227 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:18.227 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:18.227 Test: blockdev write read max offset ...passed 00:19:18.524 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:18.524 Test: blockdev writev readv 8 blocks ...passed 00:19:18.524 Test: blockdev writev readv 30 x 1block ...passed 00:19:18.524 Test: blockdev writev readv block ...passed 00:19:18.524 Test: blockdev writev readv size > 128k ...passed 00:19:18.524 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:18.524 Test: blockdev comparev and writev ...[2024-05-15 15:57:16.896288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.524 [2024-05-15 15:57:16.896317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.896333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.524 [2024-05-15 15:57:16.896344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.896825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.524 [2024-05-15 15:57:16.896838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.896852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.524 [2024-05-15 15:57:16.896862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.897301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.524 [2024-05-15 15:57:16.897314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.897328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.524 [2024-05-15 15:57:16.897337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.897825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.524 [2024-05-15 15:57:16.897837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.897850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.524 [2024-05-15 15:57:16.897860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:18.524 passed 00:19:18.524 Test: blockdev nvme passthru rw ...passed 00:19:18.524 Test: blockdev nvme passthru vendor specific ...[2024-05-15 15:57:16.982043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:18.524 [2024-05-15 15:57:16.982060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.982394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:18.524 [2024-05-15 15:57:16.982408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.982730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:18.524 [2024-05-15 15:57:16.982742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:18.524 [2024-05-15 15:57:16.983069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:18.524 [2024-05-15 15:57:16.983081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:18.524 passed 00:19:18.524 Test: blockdev nvme admin passthru ...passed 00:19:18.524 Test: blockdev copy ...passed 00:19:18.524 00:19:18.524 Run Summary: Type Total Ran Passed Failed Inactive 00:19:18.524 suites 1 1 n/a 0 0 00:19:18.524 tests 23 23 23 0 0 00:19:18.524 asserts 152 152 152 0 n/a 00:19:18.524 00:19:18.524 Elapsed time = 1.107 seconds 00:19:18.812 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.812 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.812 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.072 rmmod nvme_tcp 00:19:19.072 rmmod nvme_fabrics 00:19:19.072 rmmod nvme_keyring 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3777953 ']' 00:19:19.072 15:57:17 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3777953 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3777953 ']' 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3777953 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3777953 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3777953' 00:19:19.072 killing process with pid 3777953 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3777953 00:19:19.072 [2024-05-15 15:57:17.487724] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:19.072 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3777953 00:19:19.333 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:19.333 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:19.333 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:19.333 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.333 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.333 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.333 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.333 15:57:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.871 15:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.871 00:19:21.871 real 0m11.571s 00:19:21.871 user 0m14.113s 00:19:21.871 sys 0m6.099s 00:19:21.871 15:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:21.871 15:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.871 ************************************ 00:19:21.871 END TEST nvmf_bdevio_no_huge 00:19:21.871 ************************************ 00:19:21.871 15:57:19 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:21.871 15:57:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:21.871 15:57:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:21.871 15:57:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:21.871 ************************************ 00:19:21.871 START TEST nvmf_tls 00:19:21.871 ************************************ 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
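The killprocess helper traced above tears the target down with a small guard: it resolves the pid's command name first (here reactor_3) and only signals the pid directly when that name is not sudo. A sketch of the guard, reconstructed from the commands visible in the trace; the sudo branch is an assumption, since this run takes the direct path:

# Guarded kill: if the tracked pid turns out to be a sudo wrapper,
# signal its child (the actual app) rather than sudo itself.
pid=3777953   # the nvmf target pid from the run above
if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= "$pid")
  if [[ $name == sudo ]]; then
    kill "$(pgrep -P "$pid")"   # assumption: a single child holds the app
  else
    echo "killing process with pid $pid"
    kill "$pid"
  fi
fi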
00:19:21.871 * Looking for test storage... 00:19:21.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
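The nvme gen-hostnqn call above yields the uuid form of an NQN, nqn.2014-08.org.nvmexpress:uuid:<UUID>, with the UUID also reused as NVME_HOSTID. The same shape can be built by hand where gen-hostnqn is unavailable; uuidgen in this sketch is purely illustrative (the real tool prefers the host's DMI product UUID where available, which is why the value above is stable across runs on this node):

# Illustrative only: a host NQN of the same uuid form that
# "nvme gen-hostnqn" printed above, but with a random UUID.
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
echo "$hostnqn"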
00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:21.871 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:21.872 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.872 15:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.872 15:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.872 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:21.872 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:21.872 15:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:21.872 15:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:28.441 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.441 
15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:28.441 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:28.441 Found net devices under 0000:af:00.0: cvl_0_0 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:28.441 Found net devices under 0000:af:00.1: cvl_0_1 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.441 
15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:28.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:19:28.441 00:19:28.441 --- 10.0.0.2 ping statistics --- 00:19:28.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.441 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:28.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:19:28.441 00:19:28.441 --- 10.0.0.1 ping statistics --- 00:19:28.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.441 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:19:28.441 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3781988 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3781988 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3781988 ']' 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:28.442 15:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.442 [2024-05-15 15:57:26.862778] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:19:28.442 [2024-05-15 15:57:26.862823] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.442 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.442 [2024-05-15 15:57:26.937263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.701 [2024-05-15 15:57:27.010488] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.701 [2024-05-15 15:57:27.010520] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:28.701 [2024-05-15 15:57:27.010529] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.701 [2024-05-15 15:57:27.010537] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.701 [2024-05-15 15:57:27.010543] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.701 [2024-05-15 15:57:27.010562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.270 15:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:29.270 15:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:29.270 15:57:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.270 15:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.270 15:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.270 15:57:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.270 15:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:29.270 15:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:29.529 true 00:19:29.529 15:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:29.529 15:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:29.529 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:29.529 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:29.529 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:29.788 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:29.788 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:29.788 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:29.788 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:29.788 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:30.048 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.048 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:30.308 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:30.308 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:30.308 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.308 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:30.308 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:30.308 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:30.308 15:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:30.567 15:57:29 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # jq -r .enable_ktls 00:19:30.567 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.827 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:30.827 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:30.827 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:30.827 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.827 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:31.086 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.UWj4JUVd22 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.qu9Ee7Royd 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.UWj4JUVd22 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.qu9Ee7Royd 00:19:31.087 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:31.346 15:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:31.605 15:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.UWj4JUVd22 00:19:31.605 15:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UWj4JUVd22 00:19:31.605 15:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:31.605 [2024-05-15 15:57:30.163653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.864 15:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:31.864 15:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:32.124 [2024-05-15 15:57:30.524542] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:32.124 [2024-05-15 15:57:30.524593] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.124 [2024-05-15 15:57:30.524787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.124 15:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:32.383 malloc0 00:19:32.383 15:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:32.383 15:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UWj4JUVd22 00:19:32.643 [2024-05-15 15:57:31.014033] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:32.643 15:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UWj4JUVd22 00:19:32.643 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.624 Initializing NVMe Controllers 00:19:42.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:42.624 Initialization complete. Launching workers. 
00:19:42.624 ======================================================== 00:19:42.624 Latency(us) 00:19:42.624 Device Information : IOPS MiB/s Average min max 00:19:42.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16442.60 64.23 3892.76 819.63 5368.11 00:19:42.624 ======================================================== 00:19:42.624 Total : 16442.60 64.23 3892.76 819.63 5368.11 00:19:42.624 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UWj4JUVd22 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UWj4JUVd22' 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3784428 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3784428 /var/tmp/bdevperf.sock 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3784428 ']' 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:42.624 15:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.624 [2024-05-15 15:57:41.169626] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:19:42.624 [2024-05-15 15:57:41.169678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784428 ] 00:19:42.884 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.884 [2024-05-15 15:57:41.235715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.884 [2024-05-15 15:57:41.310303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.514 15:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:43.514 15:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:43.514 15:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UWj4JUVd22 00:19:43.776 [2024-05-15 15:57:42.116984] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.776 [2024-05-15 15:57:42.117058] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:43.776 TLSTESTn1 00:19:43.776 15:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:43.776 Running I/O for 10 seconds... 00:19:55.991 00:19:55.991 Latency(us) 00:19:55.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.992 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:55.992 Verification LBA range: start 0x0 length 0x2000 00:19:55.992 TLSTESTn1 : 10.07 1815.54 7.09 0.00 0.00 70301.15 5583.67 109890.76 00:19:55.992 =================================================================================================================== 00:19:55.992 Total : 1815.54 7.09 0.00 0.00 70301.15 5583.67 109890.76 00:19:55.992 0 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3784428 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3784428 ']' 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3784428 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3784428 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3784428' 00:19:55.992 killing process with pid 3784428 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3784428 00:19:55.992 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.992 00:19:55.992 Latency(us) 00:19:55.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:55.992 =================================================================================================================== 00:19:55.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.992 [2024-05-15 15:57:52.445514] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3784428 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qu9Ee7Royd 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qu9Ee7Royd 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qu9Ee7Royd 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qu9Ee7Royd' 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3786393 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3786393 /var/tmp/bdevperf.sock 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3786393 ']' 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:55.992 15:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.992 [2024-05-15 15:57:52.681296] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:19:55.992 [2024-05-15 15:57:52.681348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786393 ] 00:19:55.992 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.992 [2024-05-15 15:57:52.747953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.992 [2024-05-15 15:57:52.817822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qu9Ee7Royd 00:19:55.992 [2024-05-15 15:57:53.659857] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.992 [2024-05-15 15:57:53.659928] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:55.992 [2024-05-15 15:57:53.670948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:55.992 [2024-05-15 15:57:53.671335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05610 (107): Transport endpoint is not connected 00:19:55.992 [2024-05-15 15:57:53.672327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb05610 (9): Bad file descriptor 00:19:55.992 [2024-05-15 15:57:53.673329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.992 [2024-05-15 15:57:53.673342] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:55.992 [2024-05-15 15:57:53.673354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:55.992 request: 00:19:55.992 { 00:19:55.992 "name": "TLSTEST", 00:19:55.992 "trtype": "tcp", 00:19:55.992 "traddr": "10.0.0.2", 00:19:55.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.992 "adrfam": "ipv4", 00:19:55.992 "trsvcid": "4420", 00:19:55.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.992 "psk": "/tmp/tmp.qu9Ee7Royd", 00:19:55.992 "method": "bdev_nvme_attach_controller", 00:19:55.992 "req_id": 1 00:19:55.992 } 00:19:55.992 Got JSON-RPC error response 00:19:55.992 response: 00:19:55.992 { 00:19:55.992 "code": -32602, 00:19:55.992 "message": "Invalid parameters" 00:19:55.992 } 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3786393 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3786393 ']' 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3786393 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3786393 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3786393' 00:19:55.992 killing process with pid 3786393 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3786393 00:19:55.992 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.992 00:19:55.992 Latency(us) 00:19:55.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.992 =================================================================================================================== 00:19:55.992 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.992 [2024-05-15 15:57:53.754703] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3786393 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UWj4JUVd22 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UWj4JUVd22 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UWj4JUVd22 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UWj4JUVd22' 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3786566 00:19:55.992 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.993 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.993 15:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3786566 /var/tmp/bdevperf.sock 00:19:55.993 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3786566 ']' 00:19:55.993 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.993 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:55.993 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.993 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:55.993 15:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.993 [2024-05-15 15:57:53.996089] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:19:55.993 [2024-05-15 15:57:53.996141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786566 ] 00:19:55.993 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.993 [2024-05-15 15:57:54.063461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.993 [2024-05-15 15:57:54.130387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.251 15:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:56.251 15:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:56.251 15:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.UWj4JUVd22 00:19:56.510 [2024-05-15 15:57:54.940150] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.510 [2024-05-15 15:57:54.940237] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.510 [2024-05-15 15:57:54.944995] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:56.510 [2024-05-15 15:57:54.945017] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:56.510 [2024-05-15 15:57:54.945045] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.510 [2024-05-15 15:57:54.945717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a03610 (107): Transport endpoint is not connected 00:19:56.510 [2024-05-15 15:57:54.946707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a03610 (9): Bad file descriptor 00:19:56.510 [2024-05-15 15:57:54.947709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.510 [2024-05-15 15:57:54.947721] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.510 [2024-05-15 15:57:54.947732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
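The "Could not find PSK for identity: NVMe0R01 ..." errors above are the target failing its PSK lookup: the TLS identity string is built from the host and subsystem NQNs, and only the host1/cnode1 pairing was registered with a key earlier in the script. A sketch of the mismatch being exercised (rpc.py path shortened; tls.sh@152 repeats it below with the subsystem swapped, host1 against cnode2):

    # Registered on the target earlier (host1 <-> cnode1 only):
    #   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    #       nqn.2016-06.io.spdk:host1 --psk <key file>
    # Attaching as host2 yields an identity with no PSK on file, so it must fail:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
        --psk /tmp/tmp.UWj4JUVd22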
00:19:56.510 request: 00:19:56.510 { 00:19:56.510 "name": "TLSTEST", 00:19:56.510 "trtype": "tcp", 00:19:56.510 "traddr": "10.0.0.2", 00:19:56.510 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:56.510 "adrfam": "ipv4", 00:19:56.510 "trsvcid": "4420", 00:19:56.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.510 "psk": "/tmp/tmp.UWj4JUVd22", 00:19:56.510 "method": "bdev_nvme_attach_controller", 00:19:56.510 "req_id": 1 00:19:56.510 } 00:19:56.510 Got JSON-RPC error response 00:19:56.510 response: 00:19:56.510 { 00:19:56.510 "code": -32602, 00:19:56.510 "message": "Invalid parameters" 00:19:56.510 } 00:19:56.510 15:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3786566 00:19:56.510 15:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3786566 ']' 00:19:56.510 15:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3786566 00:19:56.510 15:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:56.510 15:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:56.510 15:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3786566 00:19:56.510 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:56.510 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:56.510 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3786566' 00:19:56.510 killing process with pid 3786566 00:19:56.510 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3786566 00:19:56.510 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.510 00:19:56.510 Latency(us) 00:19:56.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.510 =================================================================================================================== 00:19:56.510 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.511 [2024-05-15 15:57:55.018533] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:56.511 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3786566 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UWj4JUVd22 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UWj4JUVd22 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UWj4JUVd22 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UWj4JUVd22' 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3786836 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3786836 /var/tmp/bdevperf.sock 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3786836 ']' 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:56.769 15:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.769 [2024-05-15 15:57:55.261083] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:19:56.769 [2024-05-15 15:57:55.261134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786836 ] 00:19:56.769 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.769 [2024-05-15 15:57:55.328069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.059 [2024-05-15 15:57:55.396567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.626 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:57.626 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:57.626 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UWj4JUVd22 00:19:57.884 [2024-05-15 15:57:56.227022] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.884 [2024-05-15 15:57:56.227106] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:57.884 [2024-05-15 15:57:56.237365] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:57.884 [2024-05-15 15:57:56.237388] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:57.884 [2024-05-15 15:57:56.237415] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:57.884 [2024-05-15 15:57:56.238536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1749610 (107): Transport endpoint is not connected 00:19:57.884 [2024-05-15 15:57:56.239528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1749610 (9): Bad file descriptor 00:19:57.884 [2024-05-15 15:57:56.240529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:57.884 [2024-05-15 15:57:56.240542] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:57.884 [2024-05-15 15:57:56.240553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:57.884 request: 00:19:57.884 { 00:19:57.884 "name": "TLSTEST", 00:19:57.884 "trtype": "tcp", 00:19:57.884 "traddr": "10.0.0.2", 00:19:57.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.884 "adrfam": "ipv4", 00:19:57.884 "trsvcid": "4420", 00:19:57.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:57.884 "psk": "/tmp/tmp.UWj4JUVd22", 00:19:57.884 "method": "bdev_nvme_attach_controller", 00:19:57.884 "req_id": 1 00:19:57.884 } 00:19:57.884 Got JSON-RPC error response 00:19:57.884 response: 00:19:57.884 { 00:19:57.884 "code": -32602, 00:19:57.884 "message": "Invalid parameters" 00:19:57.884 } 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3786836 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3786836 ']' 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3786836 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3786836 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3786836' 00:19:57.884 killing process with pid 3786836 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3786836 00:19:57.884 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.884 00:19:57.884 Latency(us) 00:19:57.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.884 =================================================================================================================== 00:19:57.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.884 [2024-05-15 15:57:56.314846] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:57.884 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3786836 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3787115 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3787115 /var/tmp/bdevperf.sock 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3787115 ']' 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:58.144 15:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.144 [2024-05-15 15:57:56.555993] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:19:58.144 [2024-05-15 15:57:56.556046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787115 ] 00:19:58.144 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.144 [2024-05-15 15:57:56.622134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.144 [2024-05-15 15:57:56.689850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:59.082 [2024-05-15 15:57:57.530669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:59.082 [2024-05-15 15:57:57.532205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4cc0 (9): Bad file descriptor 00:19:59.082 [2024-05-15 15:57:57.533203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:59.082 [2024-05-15 15:57:57.533219] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:59.082 [2024-05-15 15:57:57.533231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
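tls.sh@155 drops --psk entirely. The listener was created with -k (secure channel required, judging from the TLS listener notices in this log), so the plaintext connection is torn down mid-handshake before the controller ever initializes, hence the errno 107 above. The equivalent call, minus the PSK:

    # Same attach as the earlier tests but with no --psk at all; the TLS-only
    # listener drops the unencrypted connection ("Transport endpoint is not
    # connected") and nvme_ctrlr_process_init never completes.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1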
00:19:59.082 request: 00:19:59.082 { 00:19:59.082 "name": "TLSTEST", 00:19:59.082 "trtype": "tcp", 00:19:59.082 "traddr": "10.0.0.2", 00:19:59.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.082 "adrfam": "ipv4", 00:19:59.082 "trsvcid": "4420", 00:19:59.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.082 "method": "bdev_nvme_attach_controller", 00:19:59.082 "req_id": 1 00:19:59.082 } 00:19:59.082 Got JSON-RPC error response 00:19:59.082 response: 00:19:59.082 { 00:19:59.082 "code": -32602, 00:19:59.082 "message": "Invalid parameters" 00:19:59.082 } 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3787115 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3787115 ']' 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3787115 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3787115 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3787115' 00:19:59.082 killing process with pid 3787115 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3787115 00:19:59.082 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.082 00:19:59.082 Latency(us) 00:19:59.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.082 =================================================================================================================== 00:19:59.082 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.082 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3787115 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3781988 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3781988 ']' 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3781988 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3781988 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3781988' 00:19:59.341 killing process with pid 3781988 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3781988 
00:19:59.341 [2024-05-15 15:57:57.855345] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:59.341 [2024-05-15 15:57:57.855375] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:59.341 15:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3781988 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.CrzWZyOINc 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.CrzWZyOINc 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3787399 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3787399 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3787399 ']' 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.601 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.861 [2024-05-15 15:57:58.173485] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:19:59.861 [2024-05-15 15:57:58.173533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.861 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.861 [2024-05-15 15:57:58.246631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.861 [2024-05-15 15:57:58.319367] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.861 [2024-05-15 15:57:58.319402] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.861 [2024-05-15 15:57:58.319412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.861 [2024-05-15 15:57:58.319420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.861 [2024-05-15 15:57:58.319427] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.861 [2024-05-15 15:57:58.319451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.430 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.430 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:00.430 15:57:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.430 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.430 15:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.689 15:57:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.689 15:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.CrzWZyOINc 00:20:00.689 15:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CrzWZyOINc 00:20:00.689 15:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.689 [2024-05-15 15:57:59.180980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.689 15:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.948 15:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.208 [2024-05-15 15:57:59.521833] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:01.208 [2024-05-15 15:57:59.521879] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.208 [2024-05-15 15:57:59.522064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.208 15:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.208 malloc0 00:20:01.208 15:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
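The key_long printed above (tls.sh@159) comes from nvmf/common.sh's format_key. Judging from the printed result, the configured key string is kept as ASCII bytes, a little-endian CRC32 is appended, and the whole thing is base64-wrapped between "NVMeTLSkey-1:<two-digit hex digest>:" and a trailing ":". A reconstruction under those assumptions, mirroring the python - call in the trace (a sketch, not the verbatim helper):

    format_key() {
        local prefix=$1 key=$2 digest=$3
        python3 - "$prefix" "$key" "$digest" << 'EOF'
    import base64, sys, zlib
    prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
    raw = key.encode()                           # key string kept as ASCII bytes
    crc = zlib.crc32(raw).to_bytes(4, "little")  # append 4-byte CRC32, little-endian
    print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(raw + crc).decode()))
    EOF
    }
    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
    # should reproduce the key_long traced above:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: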
00:20:01.467 15:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CrzWZyOINc 00:20:01.467 [2024-05-15 15:57:59.999300] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:01.467 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CrzWZyOINc 00:20:01.467 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.467 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.467 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CrzWZyOINc' 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3787699 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3787699 /var/tmp/bdevperf.sock 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3787699 ']' 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:01.727 15:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.727 [2024-05-15 15:58:00.064497] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:20:01.727 [2024-05-15 15:58:00.064550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787699 ] 00:20:01.727 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.727 [2024-05-15 15:58:00.131937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.727 [2024-05-15 15:58:00.201354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.664 15:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:02.664 15:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:02.664 15:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CrzWZyOINc 00:20:02.664 [2024-05-15 15:58:01.011571] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.664 [2024-05-15 15:58:01.011653] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:02.664 TLSTESTn1 00:20:02.664 15:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.664 Running I/O for 10 seconds... 00:20:14.936 00:20:14.936 Latency(us) 00:20:14.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.936 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:14.936 Verification LBA range: start 0x0 length 0x2000 00:20:14.936 TLSTESTn1 : 10.06 1737.35 6.79 0.00 0.00 73474.99 6920.60 135056.59 00:20:14.936 =================================================================================================================== 00:20:14.936 Total : 1737.35 6.79 0.00 0.00 73474.99 6920.60 135056.59 00:20:14.936 0 00:20:14.936 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.936 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3787699 00:20:14.936 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3787699 ']' 00:20:14.936 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3787699 00:20:14.936 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:14.936 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:14.936 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3787699 00:20:14.936 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3787699' 00:20:14.937 killing process with pid 3787699 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3787699 00:20:14.937 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.937 00:20:14.937 Latency(us) 00:20:14.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:14.937 =================================================================================================================== 00:20:14.937 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.937 [2024-05-15 15:58:11.352404] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3787699 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.CrzWZyOINc 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CrzWZyOINc 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CrzWZyOINc 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CrzWZyOINc 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CrzWZyOINc' 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3789638 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3789638 /var/tmp/bdevperf.sock 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3789638 ']' 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:14.937 15:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.937 [2024-05-15 15:58:11.608552] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:20:14.937 [2024-05-15 15:58:11.608607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789638 ] 00:20:14.937 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.937 [2024-05-15 15:58:11.676484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.937 [2024-05-15 15:58:11.746551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CrzWZyOINc 00:20:14.937 [2024-05-15 15:58:12.553152] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.937 [2024-05-15 15:58:12.553204] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:14.937 [2024-05-15 15:58:12.553213] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.CrzWZyOINc 00:20:14.937 request: 00:20:14.937 { 00:20:14.937 "name": "TLSTEST", 00:20:14.937 "trtype": "tcp", 00:20:14.937 "traddr": "10.0.0.2", 00:20:14.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.937 "adrfam": "ipv4", 00:20:14.937 "trsvcid": "4420", 00:20:14.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.937 "psk": "/tmp/tmp.CrzWZyOINc", 00:20:14.937 "method": "bdev_nvme_attach_controller", 00:20:14.937 "req_id": 1 00:20:14.937 } 00:20:14.937 Got JSON-RPC error response 00:20:14.937 response: 00:20:14.937 { 00:20:14.937 "code": -1, 00:20:14.937 "message": "Operation not permitted" 00:20:14.937 } 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3789638 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3789638 ']' 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3789638 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3789638 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3789638' 00:20:14.937 killing process with pid 3789638 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3789638 00:20:14.937 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.937 00:20:14.937 Latency(us) 00:20:14.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.937 =================================================================================================================== 00:20:14.937 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3789638 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3787399 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3787399 ']' 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3787399 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3787399 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3787399' 00:20:14.937 killing process with pid 3787399 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3787399 00:20:14.937 [2024-05-15 15:58:12.880404] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:14.937 [2024-05-15 15:58:12.880442] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:14.937 15:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3787399 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3789902 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3789902 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3789902 ']' 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
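tls.sh@170 loosened the key file to 0666 before retrying the attach, and the client refused to load it (bdev_nvme_load_psk, "Operation not permitted" above); @177 below shows the target enforcing the same rule in tcp_load_psk when nvmf_subsystem_add_host is handed the world-readable file. A sketch of the gate, on the assumption that any group/other permission bit disqualifies the file (consistent with 0600 passing and 0666 failing in this log):

    # Sketch only; the real checks live in bdev_nvme.c and tcp.c.
    check_psk_perms() {
        local psk=$1 mode
        mode=$(stat -c '%a' "$psk")
        if (( 8#$mode & 8#077 )); then   # any group/other bit set?
            echo "Incorrect permissions for PSK file" >&2
            return 1
        fi
    }
    check_psk_perms /tmp/tmp.CrzWZyOINc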
00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:14.937 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.937 [2024-05-15 15:58:13.148601] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:14.938 [2024-05-15 15:58:13.148654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.938 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.938 [2024-05-15 15:58:13.222295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.938 [2024-05-15 15:58:13.289253] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.938 [2024-05-15 15:58:13.289296] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.938 [2024-05-15 15:58:13.289305] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.938 [2024-05-15 15:58:13.289313] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.938 [2024-05-15 15:58:13.289320] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.938 [2024-05-15 15:58:13.289342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.CrzWZyOINc 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.CrzWZyOINc 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.CrzWZyOINc 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CrzWZyOINc 00:20:15.506 15:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:15.784 [2024-05-15 15:58:14.139753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.784 15:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:15.784 15:58:14 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:16.044 [2024-05-15 15:58:14.464566] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:16.044 [2024-05-15 15:58:14.464613] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.044 [2024-05-15 15:58:14.464797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.044 15:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:16.304 malloc0 00:20:16.304 15:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:16.304 15:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CrzWZyOINc 00:20:16.564 [2024-05-15 15:58:14.985867] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:16.564 [2024-05-15 15:58:14.985892] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:16.564 [2024-05-15 15:58:14.985917] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:16.564 request: 00:20:16.564 { 00:20:16.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.564 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.564 "psk": "/tmp/tmp.CrzWZyOINc", 00:20:16.564 "method": "nvmf_subsystem_add_host", 00:20:16.564 "req_id": 1 00:20:16.564 } 00:20:16.564 Got JSON-RPC error response 00:20:16.564 response: 00:20:16.564 { 00:20:16.564 "code": -32603, 00:20:16.564 "message": "Internal error" 00:20:16.564 } 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3789902 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3789902 ']' 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3789902 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3789902 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3789902' 00:20:16.564 killing process with pid 3789902 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3789902 00:20:16.564 [2024-05-15 15:58:15.061726] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:16.564 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3789902 00:20:16.824 15:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.CrzWZyOINc 00:20:16.824 15:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3790397 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3790397 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3790397 ']' 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.825 15:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.825 [2024-05-15 15:58:15.337242] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:16.825 [2024-05-15 15:58:15.337291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.825 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.085 [2024-05-15 15:58:15.411494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.085 [2024-05-15 15:58:15.473223] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.085 [2024-05-15 15:58:15.473265] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.085 [2024-05-15 15:58:15.473274] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.085 [2024-05-15 15:58:15.473283] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.085 [2024-05-15 15:58:15.473290] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:17.085 [2024-05-15 15:58:15.473310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.CrzWZyOINc 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CrzWZyOINc 00:20:17.654 15:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.913 [2024-05-15 15:58:16.327600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.913 15:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:18.172 15:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:18.172 [2024-05-15 15:58:16.652399] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:18.172 [2024-05-15 15:58:16.652451] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.172 [2024-05-15 15:58:16.652668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.172 15:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.432 malloc0 00:20:18.432 15:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.432 15:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CrzWZyOINc 00:20:18.691 [2024-05-15 15:58:17.121965] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3790690 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3790690 /var/tmp/bdevperf.sock 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3790690 ']' 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:18.691 15:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.691 [2024-05-15 15:58:17.182621] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:18.691 [2024-05-15 15:58:17.182670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790690 ] 00:20:18.691 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.691 [2024-05-15 15:58:17.246937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.951 [2024-05-15 15:58:17.315859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.519 15:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:19.519 15:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:19.519 15:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CrzWZyOINc 00:20:19.778 [2024-05-15 15:58:18.122564] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.778 [2024-05-15 15:58:18.122643] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.778 TLSTESTn1 00:20:19.778 15:58:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:20.038 15:58:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:20.038 "subsystems": [ 00:20:20.038 { 00:20:20.038 "subsystem": "keyring", 00:20:20.038 "config": [] 00:20:20.038 }, 00:20:20.038 { 00:20:20.038 "subsystem": "iobuf", 00:20:20.038 "config": [ 00:20:20.038 { 00:20:20.038 "method": "iobuf_set_options", 00:20:20.038 "params": { 00:20:20.038 "small_pool_count": 8192, 00:20:20.038 "large_pool_count": 1024, 00:20:20.038 "small_bufsize": 8192, 00:20:20.038 "large_bufsize": 135168 00:20:20.038 } 00:20:20.038 } 00:20:20.038 ] 00:20:20.038 }, 00:20:20.038 { 00:20:20.038 "subsystem": "sock", 00:20:20.038 "config": [ 00:20:20.038 { 00:20:20.038 "method": "sock_impl_set_options", 00:20:20.038 "params": { 00:20:20.038 "impl_name": "posix", 00:20:20.038 "recv_buf_size": 2097152, 00:20:20.038 "send_buf_size": 2097152, 00:20:20.038 "enable_recv_pipe": true, 00:20:20.038 "enable_quickack": false, 00:20:20.038 "enable_placement_id": 0, 00:20:20.038 "enable_zerocopy_send_server": true, 00:20:20.038 "enable_zerocopy_send_client": false, 00:20:20.038 "zerocopy_threshold": 0, 00:20:20.038 "tls_version": 0, 00:20:20.038 "enable_ktls": false 00:20:20.038 } 00:20:20.038 }, 00:20:20.038 { 00:20:20.038 "method": "sock_impl_set_options", 00:20:20.038 "params": { 00:20:20.038 
"impl_name": "ssl", 00:20:20.038 "recv_buf_size": 4096, 00:20:20.038 "send_buf_size": 4096, 00:20:20.038 "enable_recv_pipe": true, 00:20:20.038 "enable_quickack": false, 00:20:20.038 "enable_placement_id": 0, 00:20:20.038 "enable_zerocopy_send_server": true, 00:20:20.038 "enable_zerocopy_send_client": false, 00:20:20.038 "zerocopy_threshold": 0, 00:20:20.038 "tls_version": 0, 00:20:20.038 "enable_ktls": false 00:20:20.038 } 00:20:20.038 } 00:20:20.038 ] 00:20:20.038 }, 00:20:20.038 { 00:20:20.038 "subsystem": "vmd", 00:20:20.038 "config": [] 00:20:20.038 }, 00:20:20.038 { 00:20:20.038 "subsystem": "accel", 00:20:20.038 "config": [ 00:20:20.038 { 00:20:20.039 "method": "accel_set_options", 00:20:20.039 "params": { 00:20:20.039 "small_cache_size": 128, 00:20:20.039 "large_cache_size": 16, 00:20:20.039 "task_count": 2048, 00:20:20.039 "sequence_count": 2048, 00:20:20.039 "buf_count": 2048 00:20:20.039 } 00:20:20.039 } 00:20:20.039 ] 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "subsystem": "bdev", 00:20:20.039 "config": [ 00:20:20.039 { 00:20:20.039 "method": "bdev_set_options", 00:20:20.039 "params": { 00:20:20.039 "bdev_io_pool_size": 65535, 00:20:20.039 "bdev_io_cache_size": 256, 00:20:20.039 "bdev_auto_examine": true, 00:20:20.039 "iobuf_small_cache_size": 128, 00:20:20.039 "iobuf_large_cache_size": 16 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "bdev_raid_set_options", 00:20:20.039 "params": { 00:20:20.039 "process_window_size_kb": 1024 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "bdev_iscsi_set_options", 00:20:20.039 "params": { 00:20:20.039 "timeout_sec": 30 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "bdev_nvme_set_options", 00:20:20.039 "params": { 00:20:20.039 "action_on_timeout": "none", 00:20:20.039 "timeout_us": 0, 00:20:20.039 "timeout_admin_us": 0, 00:20:20.039 "keep_alive_timeout_ms": 10000, 00:20:20.039 "arbitration_burst": 0, 00:20:20.039 "low_priority_weight": 0, 00:20:20.039 "medium_priority_weight": 0, 00:20:20.039 "high_priority_weight": 0, 00:20:20.039 "nvme_adminq_poll_period_us": 10000, 00:20:20.039 "nvme_ioq_poll_period_us": 0, 00:20:20.039 "io_queue_requests": 0, 00:20:20.039 "delay_cmd_submit": true, 00:20:20.039 "transport_retry_count": 4, 00:20:20.039 "bdev_retry_count": 3, 00:20:20.039 "transport_ack_timeout": 0, 00:20:20.039 "ctrlr_loss_timeout_sec": 0, 00:20:20.039 "reconnect_delay_sec": 0, 00:20:20.039 "fast_io_fail_timeout_sec": 0, 00:20:20.039 "disable_auto_failback": false, 00:20:20.039 "generate_uuids": false, 00:20:20.039 "transport_tos": 0, 00:20:20.039 "nvme_error_stat": false, 00:20:20.039 "rdma_srq_size": 0, 00:20:20.039 "io_path_stat": false, 00:20:20.039 "allow_accel_sequence": false, 00:20:20.039 "rdma_max_cq_size": 0, 00:20:20.039 "rdma_cm_event_timeout_ms": 0, 00:20:20.039 "dhchap_digests": [ 00:20:20.039 "sha256", 00:20:20.039 "sha384", 00:20:20.039 "sha512" 00:20:20.039 ], 00:20:20.039 "dhchap_dhgroups": [ 00:20:20.039 "null", 00:20:20.039 "ffdhe2048", 00:20:20.039 "ffdhe3072", 00:20:20.039 "ffdhe4096", 00:20:20.039 "ffdhe6144", 00:20:20.039 "ffdhe8192" 00:20:20.039 ] 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "bdev_nvme_set_hotplug", 00:20:20.039 "params": { 00:20:20.039 "period_us": 100000, 00:20:20.039 "enable": false 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "bdev_malloc_create", 00:20:20.039 "params": { 00:20:20.039 "name": "malloc0", 00:20:20.039 "num_blocks": 8192, 00:20:20.039 "block_size": 4096, 00:20:20.039 
"physical_block_size": 4096, 00:20:20.039 "uuid": "6ab5fb68-5882-40a0-ab2b-e9437e003274", 00:20:20.039 "optimal_io_boundary": 0 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "bdev_wait_for_examine" 00:20:20.039 } 00:20:20.039 ] 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "subsystem": "nbd", 00:20:20.039 "config": [] 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "subsystem": "scheduler", 00:20:20.039 "config": [ 00:20:20.039 { 00:20:20.039 "method": "framework_set_scheduler", 00:20:20.039 "params": { 00:20:20.039 "name": "static" 00:20:20.039 } 00:20:20.039 } 00:20:20.039 ] 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "subsystem": "nvmf", 00:20:20.039 "config": [ 00:20:20.039 { 00:20:20.039 "method": "nvmf_set_config", 00:20:20.039 "params": { 00:20:20.039 "discovery_filter": "match_any", 00:20:20.039 "admin_cmd_passthru": { 00:20:20.039 "identify_ctrlr": false 00:20:20.039 } 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "nvmf_set_max_subsystems", 00:20:20.039 "params": { 00:20:20.039 "max_subsystems": 1024 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "nvmf_set_crdt", 00:20:20.039 "params": { 00:20:20.039 "crdt1": 0, 00:20:20.039 "crdt2": 0, 00:20:20.039 "crdt3": 0 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "nvmf_create_transport", 00:20:20.039 "params": { 00:20:20.039 "trtype": "TCP", 00:20:20.039 "max_queue_depth": 128, 00:20:20.039 "max_io_qpairs_per_ctrlr": 127, 00:20:20.039 "in_capsule_data_size": 4096, 00:20:20.039 "max_io_size": 131072, 00:20:20.039 "io_unit_size": 131072, 00:20:20.039 "max_aq_depth": 128, 00:20:20.039 "num_shared_buffers": 511, 00:20:20.039 "buf_cache_size": 4294967295, 00:20:20.039 "dif_insert_or_strip": false, 00:20:20.039 "zcopy": false, 00:20:20.039 "c2h_success": false, 00:20:20.039 "sock_priority": 0, 00:20:20.039 "abort_timeout_sec": 1, 00:20:20.039 "ack_timeout": 0, 00:20:20.039 "data_wr_pool_size": 0 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "nvmf_create_subsystem", 00:20:20.039 "params": { 00:20:20.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.039 "allow_any_host": false, 00:20:20.039 "serial_number": "SPDK00000000000001", 00:20:20.039 "model_number": "SPDK bdev Controller", 00:20:20.039 "max_namespaces": 10, 00:20:20.039 "min_cntlid": 1, 00:20:20.039 "max_cntlid": 65519, 00:20:20.039 "ana_reporting": false 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "nvmf_subsystem_add_host", 00:20:20.039 "params": { 00:20:20.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.039 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.039 "psk": "/tmp/tmp.CrzWZyOINc" 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "nvmf_subsystem_add_ns", 00:20:20.039 "params": { 00:20:20.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.039 "namespace": { 00:20:20.039 "nsid": 1, 00:20:20.039 "bdev_name": "malloc0", 00:20:20.039 "nguid": "6AB5FB68588240A0AB2BE9437E003274", 00:20:20.039 "uuid": "6ab5fb68-5882-40a0-ab2b-e9437e003274", 00:20:20.039 "no_auto_visible": false 00:20:20.039 } 00:20:20.039 } 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "method": "nvmf_subsystem_add_listener", 00:20:20.039 "params": { 00:20:20.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.039 "listen_address": { 00:20:20.039 "trtype": "TCP", 00:20:20.039 "adrfam": "IPv4", 00:20:20.039 "traddr": "10.0.0.2", 00:20:20.039 "trsvcid": "4420" 00:20:20.039 }, 00:20:20.039 "secure_channel": true 00:20:20.039 } 00:20:20.039 } 00:20:20.039 ] 00:20:20.039 } 
00:20:20.039 ] 00:20:20.039 }' 00:20:20.040 15:58:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:20.300 "subsystems": [ 00:20:20.300 { 00:20:20.300 "subsystem": "keyring", 00:20:20.300 "config": [] 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "subsystem": "iobuf", 00:20:20.300 "config": [ 00:20:20.300 { 00:20:20.300 "method": "iobuf_set_options", 00:20:20.300 "params": { 00:20:20.300 "small_pool_count": 8192, 00:20:20.300 "large_pool_count": 1024, 00:20:20.300 "small_bufsize": 8192, 00:20:20.300 "large_bufsize": 135168 00:20:20.300 } 00:20:20.300 } 00:20:20.300 ] 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "subsystem": "sock", 00:20:20.300 "config": [ 00:20:20.300 { 00:20:20.300 "method": "sock_impl_set_options", 00:20:20.300 "params": { 00:20:20.300 "impl_name": "posix", 00:20:20.300 "recv_buf_size": 2097152, 00:20:20.300 "send_buf_size": 2097152, 00:20:20.300 "enable_recv_pipe": true, 00:20:20.300 "enable_quickack": false, 00:20:20.300 "enable_placement_id": 0, 00:20:20.300 "enable_zerocopy_send_server": true, 00:20:20.300 "enable_zerocopy_send_client": false, 00:20:20.300 "zerocopy_threshold": 0, 00:20:20.300 "tls_version": 0, 00:20:20.300 "enable_ktls": false 00:20:20.300 } 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "method": "sock_impl_set_options", 00:20:20.300 "params": { 00:20:20.300 "impl_name": "ssl", 00:20:20.300 "recv_buf_size": 4096, 00:20:20.300 "send_buf_size": 4096, 00:20:20.300 "enable_recv_pipe": true, 00:20:20.300 "enable_quickack": false, 00:20:20.300 "enable_placement_id": 0, 00:20:20.300 "enable_zerocopy_send_server": true, 00:20:20.300 "enable_zerocopy_send_client": false, 00:20:20.300 "zerocopy_threshold": 0, 00:20:20.300 "tls_version": 0, 00:20:20.300 "enable_ktls": false 00:20:20.300 } 00:20:20.300 } 00:20:20.300 ] 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "subsystem": "vmd", 00:20:20.300 "config": [] 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "subsystem": "accel", 00:20:20.300 "config": [ 00:20:20.300 { 00:20:20.300 "method": "accel_set_options", 00:20:20.300 "params": { 00:20:20.300 "small_cache_size": 128, 00:20:20.300 "large_cache_size": 16, 00:20:20.300 "task_count": 2048, 00:20:20.300 "sequence_count": 2048, 00:20:20.300 "buf_count": 2048 00:20:20.300 } 00:20:20.300 } 00:20:20.300 ] 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "subsystem": "bdev", 00:20:20.300 "config": [ 00:20:20.300 { 00:20:20.300 "method": "bdev_set_options", 00:20:20.300 "params": { 00:20:20.300 "bdev_io_pool_size": 65535, 00:20:20.300 "bdev_io_cache_size": 256, 00:20:20.300 "bdev_auto_examine": true, 00:20:20.300 "iobuf_small_cache_size": 128, 00:20:20.300 "iobuf_large_cache_size": 16 00:20:20.300 } 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "method": "bdev_raid_set_options", 00:20:20.300 "params": { 00:20:20.300 "process_window_size_kb": 1024 00:20:20.300 } 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "method": "bdev_iscsi_set_options", 00:20:20.300 "params": { 00:20:20.300 "timeout_sec": 30 00:20:20.300 } 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "method": "bdev_nvme_set_options", 00:20:20.300 "params": { 00:20:20.300 "action_on_timeout": "none", 00:20:20.300 "timeout_us": 0, 00:20:20.300 "timeout_admin_us": 0, 00:20:20.300 "keep_alive_timeout_ms": 10000, 00:20:20.300 "arbitration_burst": 0, 00:20:20.300 "low_priority_weight": 0, 00:20:20.300 "medium_priority_weight": 0, 00:20:20.300 
"high_priority_weight": 0, 00:20:20.300 "nvme_adminq_poll_period_us": 10000, 00:20:20.300 "nvme_ioq_poll_period_us": 0, 00:20:20.300 "io_queue_requests": 512, 00:20:20.300 "delay_cmd_submit": true, 00:20:20.300 "transport_retry_count": 4, 00:20:20.300 "bdev_retry_count": 3, 00:20:20.300 "transport_ack_timeout": 0, 00:20:20.300 "ctrlr_loss_timeout_sec": 0, 00:20:20.300 "reconnect_delay_sec": 0, 00:20:20.300 "fast_io_fail_timeout_sec": 0, 00:20:20.300 "disable_auto_failback": false, 00:20:20.300 "generate_uuids": false, 00:20:20.300 "transport_tos": 0, 00:20:20.300 "nvme_error_stat": false, 00:20:20.300 "rdma_srq_size": 0, 00:20:20.300 "io_path_stat": false, 00:20:20.300 "allow_accel_sequence": false, 00:20:20.300 "rdma_max_cq_size": 0, 00:20:20.300 "rdma_cm_event_timeout_ms": 0, 00:20:20.300 "dhchap_digests": [ 00:20:20.300 "sha256", 00:20:20.300 "sha384", 00:20:20.300 "sha512" 00:20:20.300 ], 00:20:20.300 "dhchap_dhgroups": [ 00:20:20.300 "null", 00:20:20.300 "ffdhe2048", 00:20:20.300 "ffdhe3072", 00:20:20.300 "ffdhe4096", 00:20:20.300 "ffdhe6144", 00:20:20.300 "ffdhe8192" 00:20:20.300 ] 00:20:20.300 } 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "method": "bdev_nvme_attach_controller", 00:20:20.300 "params": { 00:20:20.300 "name": "TLSTEST", 00:20:20.300 "trtype": "TCP", 00:20:20.300 "adrfam": "IPv4", 00:20:20.300 "traddr": "10.0.0.2", 00:20:20.300 "trsvcid": "4420", 00:20:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.300 "prchk_reftag": false, 00:20:20.300 "prchk_guard": false, 00:20:20.300 "ctrlr_loss_timeout_sec": 0, 00:20:20.300 "reconnect_delay_sec": 0, 00:20:20.300 "fast_io_fail_timeout_sec": 0, 00:20:20.300 "psk": "/tmp/tmp.CrzWZyOINc", 00:20:20.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.300 "hdgst": false, 00:20:20.300 "ddgst": false 00:20:20.300 } 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "method": "bdev_nvme_set_hotplug", 00:20:20.300 "params": { 00:20:20.300 "period_us": 100000, 00:20:20.300 "enable": false 00:20:20.300 } 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "method": "bdev_wait_for_examine" 00:20:20.300 } 00:20:20.300 ] 00:20:20.300 }, 00:20:20.300 { 00:20:20.300 "subsystem": "nbd", 00:20:20.300 "config": [] 00:20:20.300 } 00:20:20.300 ] 00:20:20.300 }' 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3790690 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3790690 ']' 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3790690 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3790690 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3790690' 00:20:20.300 killing process with pid 3790690 00:20:20.300 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3790690 00:20:20.300 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.300 00:20:20.300 Latency(us) 00:20:20.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.301 
=================================================================================================================== 00:20:20.301 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.301 [2024-05-15 15:58:18.780338] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.301 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3790690 00:20:20.560 15:58:18 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3790397 00:20:20.560 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3790397 ']' 00:20:20.560 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3790397 00:20:20.560 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:20.560 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.560 15:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3790397 00:20:20.561 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:20.561 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:20.561 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3790397' 00:20:20.561 killing process with pid 3790397 00:20:20.561 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3790397 00:20:20.561 [2024-05-15 15:58:19.033569] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:20.561 [2024-05-15 15:58:19.033604] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:20.561 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3790397 00:20:20.820 15:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:20.820 15:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.820 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:20.820 15:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:20.820 "subsystems": [ 00:20:20.820 { 00:20:20.820 "subsystem": "keyring", 00:20:20.820 "config": [] 00:20:20.820 }, 00:20:20.820 { 00:20:20.820 "subsystem": "iobuf", 00:20:20.820 "config": [ 00:20:20.820 { 00:20:20.820 "method": "iobuf_set_options", 00:20:20.820 "params": { 00:20:20.820 "small_pool_count": 8192, 00:20:20.820 "large_pool_count": 1024, 00:20:20.820 "small_bufsize": 8192, 00:20:20.820 "large_bufsize": 135168 00:20:20.820 } 00:20:20.820 } 00:20:20.820 ] 00:20:20.820 }, 00:20:20.820 { 00:20:20.820 "subsystem": "sock", 00:20:20.820 "config": [ 00:20:20.820 { 00:20:20.820 "method": "sock_impl_set_options", 00:20:20.820 "params": { 00:20:20.820 "impl_name": "posix", 00:20:20.820 "recv_buf_size": 2097152, 00:20:20.820 "send_buf_size": 2097152, 00:20:20.820 "enable_recv_pipe": true, 00:20:20.820 "enable_quickack": false, 00:20:20.820 "enable_placement_id": 0, 00:20:20.821 "enable_zerocopy_send_server": true, 00:20:20.821 "enable_zerocopy_send_client": false, 00:20:20.821 "zerocopy_threshold": 0, 00:20:20.821 "tls_version": 0, 00:20:20.821 "enable_ktls": false 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "sock_impl_set_options", 00:20:20.821 
"params": { 00:20:20.821 "impl_name": "ssl", 00:20:20.821 "recv_buf_size": 4096, 00:20:20.821 "send_buf_size": 4096, 00:20:20.821 "enable_recv_pipe": true, 00:20:20.821 "enable_quickack": false, 00:20:20.821 "enable_placement_id": 0, 00:20:20.821 "enable_zerocopy_send_server": true, 00:20:20.821 "enable_zerocopy_send_client": false, 00:20:20.821 "zerocopy_threshold": 0, 00:20:20.821 "tls_version": 0, 00:20:20.821 "enable_ktls": false 00:20:20.821 } 00:20:20.821 } 00:20:20.821 ] 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "subsystem": "vmd", 00:20:20.821 "config": [] 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "subsystem": "accel", 00:20:20.821 "config": [ 00:20:20.821 { 00:20:20.821 "method": "accel_set_options", 00:20:20.821 "params": { 00:20:20.821 "small_cache_size": 128, 00:20:20.821 "large_cache_size": 16, 00:20:20.821 "task_count": 2048, 00:20:20.821 "sequence_count": 2048, 00:20:20.821 "buf_count": 2048 00:20:20.821 } 00:20:20.821 } 00:20:20.821 ] 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "subsystem": "bdev", 00:20:20.821 "config": [ 00:20:20.821 { 00:20:20.821 "method": "bdev_set_options", 00:20:20.821 "params": { 00:20:20.821 "bdev_io_pool_size": 65535, 00:20:20.821 "bdev_io_cache_size": 256, 00:20:20.821 "bdev_auto_examine": true, 00:20:20.821 "iobuf_small_cache_size": 128, 00:20:20.821 "iobuf_large_cache_size": 16 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "bdev_raid_set_options", 00:20:20.821 "params": { 00:20:20.821 "process_window_size_kb": 1024 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "bdev_iscsi_set_options", 00:20:20.821 "params": { 00:20:20.821 "timeout_sec": 30 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "bdev_nvme_set_options", 00:20:20.821 "params": { 00:20:20.821 "action_on_timeout": "none", 00:20:20.821 "timeout_us": 0, 00:20:20.821 "timeout_admin_us": 0, 00:20:20.821 "keep_alive_timeout_ms": 10000, 00:20:20.821 "arbitration_burst": 0, 00:20:20.821 "low_priority_weight": 0, 00:20:20.821 "medium_priority_weight": 0, 00:20:20.821 "high_priority_weight": 0, 00:20:20.821 "nvme_adminq_poll_period_us": 10000, 00:20:20.821 "nvme_ioq_poll_period_us": 0, 00:20:20.821 "io_queue_requests": 0, 00:20:20.821 "delay_cmd_submit": true, 00:20:20.821 "transport_retry_count": 4, 00:20:20.821 "bdev_retry_count": 3, 00:20:20.821 "transport_ack_timeout": 0, 00:20:20.821 "ctrlr_loss_timeout_sec": 0, 00:20:20.821 "reconnect_delay_sec": 0, 00:20:20.821 "fast_io_fail_timeout_sec": 0, 00:20:20.821 "disable_auto_failback": false, 00:20:20.821 "generate_uuids": false, 00:20:20.821 "transport_tos": 0, 00:20:20.821 "nvme_error_stat": false, 00:20:20.821 "rdma_srq_size": 0, 00:20:20.821 "io_path_stat": false, 00:20:20.821 "allow_accel_sequence": false, 00:20:20.821 "rdma_max_cq_size": 0, 00:20:20.821 "rdma_cm_event_timeout_ms": 0, 00:20:20.821 "dhchap_digests": [ 00:20:20.821 "sha256", 00:20:20.821 "sha384", 00:20:20.821 "sha512" 00:20:20.821 ], 00:20:20.821 "dhchap_dhgroups": [ 00:20:20.821 "null", 00:20:20.821 "ffdhe2048", 00:20:20.821 "ffdhe3072", 00:20:20.821 "ffdhe4096", 00:20:20.821 "ffdhe6144", 00:20:20.821 "ffdhe8192" 00:20:20.821 ] 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "bdev_nvme_set_hotplug", 00:20:20.821 "params": { 00:20:20.821 "period_us": 100000, 00:20:20.821 "enable": false 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "bdev_malloc_create", 00:20:20.821 "params": { 00:20:20.821 "name": "malloc0", 00:20:20.821 "num_blocks": 8192, 00:20:20.821 
"block_size": 4096, 00:20:20.821 "physical_block_size": 4096, 00:20:20.821 "uuid": "6ab5fb68-5882-40a0-ab2b-e9437e003274", 00:20:20.821 "optimal_io_boundary": 0 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "bdev_wait_for_examine" 00:20:20.821 } 00:20:20.821 ] 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "subsystem": "nbd", 00:20:20.821 "config": [] 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "subsystem": "scheduler", 00:20:20.821 "config": [ 00:20:20.821 { 00:20:20.821 "method": "framework_set_scheduler", 00:20:20.821 "params": { 00:20:20.821 "name": "static" 00:20:20.821 } 00:20:20.821 } 00:20:20.821 ] 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "subsystem": "nvmf", 00:20:20.821 "config": [ 00:20:20.821 { 00:20:20.821 "method": "nvmf_set_config", 00:20:20.821 "params": { 00:20:20.821 "discovery_filter": "match_any", 00:20:20.821 "admin_cmd_passthru": { 00:20:20.821 "identify_ctrlr": false 00:20:20.821 } 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "nvmf_set_max_subsystems", 00:20:20.821 "params": { 00:20:20.821 "max_subsystems": 1024 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "nvmf_set_crdt", 00:20:20.821 "params": { 00:20:20.821 "crdt1": 0, 00:20:20.821 "crdt2": 0, 00:20:20.821 "crdt3": 0 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "nvmf_create_transport", 00:20:20.821 "params": { 00:20:20.821 "trtype": "TCP", 00:20:20.821 "max_queue_depth": 128, 00:20:20.821 "max_io_qpairs_per_ctrlr": 127, 00:20:20.821 "in_capsule_data_size": 4096, 00:20:20.821 "max_io_size": 131072, 00:20:20.821 "io_unit_size": 131072, 00:20:20.821 "max_aq_depth": 128, 00:20:20.821 "num_shared_buffers": 511, 00:20:20.821 "buf_cache_size": 4294967295, 00:20:20.821 "dif_insert_or_strip": false, 00:20:20.821 "zcopy": false, 00:20:20.821 "c2h_success": false, 00:20:20.821 "sock_priority": 0, 00:20:20.821 "abort_timeout_sec": 1, 00:20:20.821 "ack_timeout": 0, 00:20:20.821 "data_wr_pool_size": 0 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "nvmf_create_subsystem", 00:20:20.821 "params": { 00:20:20.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.821 "allow_any_host": false, 00:20:20.821 "serial_number": "SPDK00000000000001", 00:20:20.821 "model_number": "SPDK bdev Controller", 00:20:20.821 "max_namespaces": 10, 00:20:20.821 "min_cntlid": 1, 00:20:20.821 "max_cntlid": 65519, 00:20:20.821 "ana_reporting": false 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "nvmf_subsystem_add_host", 00:20:20.821 "params": { 00:20:20.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.821 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.821 "psk": "/tmp/tmp.CrzWZyOINc" 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "nvmf_subsystem_add_ns", 00:20:20.821 "params": { 00:20:20.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.821 "namespace": { 00:20:20.821 "nsid": 1, 00:20:20.821 "bdev_name": "malloc0", 00:20:20.821 "nguid": "6AB5FB68588240A0AB2BE9437E003274", 00:20:20.821 "uuid": "6ab5fb68-5882-40a0-ab2b-e9437e003274", 00:20:20.821 "no_auto_visible": false 00:20:20.821 } 00:20:20.821 } 00:20:20.821 }, 00:20:20.821 { 00:20:20.821 "method": "nvmf_subsystem_add_listener", 00:20:20.822 "params": { 00:20:20.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.822 "listen_address": { 00:20:20.822 "trtype": "TCP", 00:20:20.822 "adrfam": "IPv4", 00:20:20.822 "traddr": "10.0.0.2", 00:20:20.822 "trsvcid": "4420" 00:20:20.822 }, 00:20:20.822 "secure_channel": true 00:20:20.822 } 00:20:20.822 } 
00:20:20.822 ] 00:20:20.822 } 00:20:20.822 ] 00:20:20.822 }' 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3790992 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3790992 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3790992 ']' 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:20.822 15:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.822 [2024-05-15 15:58:19.307106] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:20.822 [2024-05-15 15:58:19.307157] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.822 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.822 [2024-05-15 15:58:19.378525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.081 [2024-05-15 15:58:19.451261] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.081 [2024-05-15 15:58:19.451298] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.081 [2024-05-15 15:58:19.451307] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.081 [2024-05-15 15:58:19.451316] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.081 [2024-05-15 15:58:19.451323] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:21.081 [2024-05-15 15:58:19.451386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.340 [2024-05-15 15:58:19.646403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.340 [2024-05-15 15:58:19.662376] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:21.340 [2024-05-15 15:58:19.678407] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:21.340 [2024-05-15 15:58:19.678448] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.341 [2024-05-15 15:58:19.690577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3791262 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3791262 /var/tmp/bdevperf.sock 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3791262 ']' 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:21.600 15:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:21.600 "subsystems": [ 00:20:21.600 { 00:20:21.600 "subsystem": "keyring", 00:20:21.600 "config": [] 00:20:21.600 }, 00:20:21.600 { 00:20:21.600 "subsystem": "iobuf", 00:20:21.600 "config": [ 00:20:21.600 { 00:20:21.600 "method": "iobuf_set_options", 00:20:21.600 "params": { 00:20:21.600 "small_pool_count": 8192, 00:20:21.600 "large_pool_count": 1024, 00:20:21.600 "small_bufsize": 8192, 00:20:21.600 "large_bufsize": 135168 00:20:21.600 } 00:20:21.600 } 00:20:21.600 ] 00:20:21.600 }, 00:20:21.600 { 00:20:21.600 "subsystem": "sock", 00:20:21.600 "config": [ 00:20:21.600 { 00:20:21.600 "method": "sock_impl_set_options", 00:20:21.600 "params": { 00:20:21.600 "impl_name": "posix", 00:20:21.600 "recv_buf_size": 2097152, 00:20:21.600 "send_buf_size": 2097152, 00:20:21.600 "enable_recv_pipe": true, 00:20:21.600 "enable_quickack": false, 00:20:21.600 "enable_placement_id": 0, 00:20:21.600 "enable_zerocopy_send_server": true, 00:20:21.600 "enable_zerocopy_send_client": false, 00:20:21.600 "zerocopy_threshold": 0, 00:20:21.600 "tls_version": 0, 00:20:21.600 "enable_ktls": false 00:20:21.600 } 00:20:21.600 }, 00:20:21.600 { 00:20:21.600 "method": "sock_impl_set_options", 00:20:21.600 "params": { 00:20:21.600 "impl_name": "ssl", 00:20:21.600 "recv_buf_size": 4096, 00:20:21.600 "send_buf_size": 4096, 00:20:21.600 "enable_recv_pipe": true, 00:20:21.600 "enable_quickack": false, 00:20:21.600 "enable_placement_id": 0, 00:20:21.600 "enable_zerocopy_send_server": true, 00:20:21.600 "enable_zerocopy_send_client": false, 00:20:21.600 "zerocopy_threshold": 0, 00:20:21.600 "tls_version": 0, 00:20:21.600 "enable_ktls": false 00:20:21.600 } 00:20:21.600 } 00:20:21.600 ] 00:20:21.600 }, 00:20:21.600 { 00:20:21.600 "subsystem": "vmd", 00:20:21.600 "config": [] 00:20:21.600 }, 00:20:21.600 { 00:20:21.600 "subsystem": "accel", 00:20:21.600 "config": [ 00:20:21.600 { 00:20:21.600 "method": "accel_set_options", 00:20:21.600 "params": { 00:20:21.600 "small_cache_size": 128, 00:20:21.600 "large_cache_size": 16, 00:20:21.600 "task_count": 2048, 00:20:21.600 "sequence_count": 2048, 00:20:21.600 "buf_count": 2048 00:20:21.601 } 00:20:21.601 } 00:20:21.601 ] 00:20:21.601 }, 00:20:21.601 { 00:20:21.601 "subsystem": "bdev", 00:20:21.601 "config": [ 00:20:21.601 { 00:20:21.601 "method": "bdev_set_options", 00:20:21.601 "params": { 00:20:21.601 "bdev_io_pool_size": 65535, 00:20:21.601 "bdev_io_cache_size": 256, 00:20:21.601 "bdev_auto_examine": true, 00:20:21.601 "iobuf_small_cache_size": 128, 00:20:21.601 "iobuf_large_cache_size": 16 00:20:21.601 } 00:20:21.601 }, 00:20:21.601 { 00:20:21.601 "method": "bdev_raid_set_options", 00:20:21.601 "params": { 00:20:21.601 "process_window_size_kb": 1024 00:20:21.601 } 00:20:21.601 }, 00:20:21.601 { 00:20:21.601 "method": "bdev_iscsi_set_options", 00:20:21.601 "params": { 00:20:21.601 "timeout_sec": 30 00:20:21.601 } 00:20:21.601 }, 00:20:21.601 { 00:20:21.601 "method": "bdev_nvme_set_options", 00:20:21.601 "params": { 00:20:21.601 "action_on_timeout": "none", 00:20:21.601 "timeout_us": 0, 00:20:21.601 "timeout_admin_us": 0, 00:20:21.601 "keep_alive_timeout_ms": 10000, 00:20:21.601 "arbitration_burst": 0, 00:20:21.601 "low_priority_weight": 0, 00:20:21.601 "medium_priority_weight": 0, 00:20:21.601 "high_priority_weight": 0, 00:20:21.601 "nvme_adminq_poll_period_us": 10000, 00:20:21.601 "nvme_ioq_poll_period_us": 0, 00:20:21.601 "io_queue_requests": 512, 00:20:21.601 "delay_cmd_submit": true, 00:20:21.601 
"transport_retry_count": 4, 00:20:21.601 "bdev_retry_count": 3, 00:20:21.601 "transport_ack_timeout": 0, 00:20:21.601 "ctrlr_loss_timeout_sec": 0, 00:20:21.601 "reconnect_delay_sec": 0, 00:20:21.601 "fast_io_fail_timeout_sec": 0, 00:20:21.601 "disable_auto_failback": false, 00:20:21.601 "generate_uuids": false, 00:20:21.601 "transport_tos": 0, 00:20:21.601 "nvme_error_stat": false, 00:20:21.601 "rdma_srq_size": 0, 00:20:21.601 "io_path_stat": false, 00:20:21.601 "allow_accel_sequence": false, 00:20:21.601 "rdma_max_cq_size": 0, 00:20:21.601 "rdma_cm_event_timeout_ms": 0, 00:20:21.601 "dhchap_digests": [ 00:20:21.601 "sha256", 00:20:21.601 "sha384", 00:20:21.601 "sha512" 00:20:21.601 ], 00:20:21.601 "dhchap_dhgroups": [ 00:20:21.601 "null", 00:20:21.601 "ffdhe2048", 00:20:21.601 "ffdhe3072", 00:20:21.601 "ffdhe4096", 00:20:21.601 "ffdhe6144", 00:20:21.601 "ffdhe8192" 00:20:21.601 ] 00:20:21.601 } 00:20:21.601 }, 00:20:21.601 { 00:20:21.601 "method": "bdev_nvme_attach_controller", 00:20:21.601 "params": { 00:20:21.601 "name": "TLSTEST", 00:20:21.601 "trtype": "TCP", 00:20:21.601 "adrfam": "IPv4", 00:20:21.601 "traddr": "10.0.0.2", 00:20:21.601 "trsvcid": "4420", 00:20:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.601 "prchk_reftag": false, 00:20:21.601 "prchk_guard": false, 00:20:21.601 "ctrlr_loss_timeout_sec": 0, 00:20:21.601 "reconnect_delay_sec": 0, 00:20:21.601 "fast_io_fail_timeout_sec": 0, 00:20:21.601 "psk": "/tmp/tmp.CrzWZyOINc", 00:20:21.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.601 "hdgst": false, 00:20:21.601 "ddgst": false 00:20:21.601 } 00:20:21.601 }, 00:20:21.601 { 00:20:21.601 "method": "bdev_nvme_set_hotplug", 00:20:21.601 "params": { 00:20:21.601 "period_us": 100000, 00:20:21.601 "enable": false 00:20:21.601 } 00:20:21.601 }, 00:20:21.601 { 00:20:21.601 "method": "bdev_wait_for_examine" 00:20:21.601 } 00:20:21.601 ] 00:20:21.601 }, 00:20:21.601 { 00:20:21.601 "subsystem": "nbd", 00:20:21.601 "config": [] 00:20:21.601 } 00:20:21.601 ] 00:20:21.601 }' 00:20:21.601 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:21.601 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.861 [2024-05-15 15:58:20.186778] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:20:21.861 [2024-05-15 15:58:20.186834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3791262 ] 00:20:21.861 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.861 [2024-05-15 15:58:20.253989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.861 [2024-05-15 15:58:20.324259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.120 [2024-05-15 15:58:20.458543] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.120 [2024-05-15 15:58:20.458628] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:22.688 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:22.688 15:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:22.688 15:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:22.688 Running I/O for 10 seconds... 00:20:32.683 00:20:32.683 Latency(us) 00:20:32.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.683 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:32.683 Verification LBA range: start 0x0 length 0x2000 00:20:32.683 TLSTESTn1 : 10.08 1748.92 6.83 0.00 0.00 72930.32 5295.31 114923.93 00:20:32.683 =================================================================================================================== 00:20:32.683 Total : 1748.92 6.83 0.00 0.00 72930.32 5295.31 114923.93 00:20:32.683 0 00:20:32.683 15:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:32.683 15:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3791262 00:20:32.683 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3791262 ']' 00:20:32.684 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3791262 00:20:32.684 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:32.684 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:32.684 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3791262 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3791262' 00:20:32.947 killing process with pid 3791262 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3791262 00:20:32.947 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.947 00:20:32.947 Latency(us) 00:20:32.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.947 =================================================================================================================== 00:20:32.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.947 [2024-05-15 15:58:31.260015] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3791262 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3790992 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3790992 ']' 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3790992 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:32.947 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3790992 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3790992' 00:20:33.207 killing process with pid 3790992 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3790992 00:20:33.207 [2024-05-15 15:58:31.522448] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:33.207 [2024-05-15 15:58:31.522486] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3790992 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3793131 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3793131 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3793131 ']' 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:33.207 15:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.467 [2024-05-15 15:58:31.791469] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:20:33.467 [2024-05-15 15:58:31.791519] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.467 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.467 [2024-05-15 15:58:31.865858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.467 [2024-05-15 15:58:31.938623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.467 [2024-05-15 15:58:31.938662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.467 [2024-05-15 15:58:31.938672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.467 [2024-05-15 15:58:31.938680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.467 [2024-05-15 15:58:31.938687] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.467 [2024-05-15 15:58:31.938713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.CrzWZyOINc 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CrzWZyOINc 00:20:34.082 15:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:34.341 [2024-05-15 15:58:32.774255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.341 15:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.599 15:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.599 [2024-05-15 15:58:33.107066] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:34.599 [2024-05-15 15:58:33.107120] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.599 [2024-05-15 15:58:33.107348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.599 15:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.858 malloc0 00:20:34.858 15:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CrzWZyOINc 00:20:35.116 [2024-05-15 15:58:33.624660] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3793526 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3793526 /var/tmp/bdevperf.sock 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3793526 ']' 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:35.116 15:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.374 [2024-05-15 15:58:33.687919] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:35.374 [2024-05-15 15:58:33.687971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793526 ] 00:20:35.374 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.374 [2024-05-15 15:58:33.759322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.374 [2024-05-15 15:58:33.829221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.942 15:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:35.942 15:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:35.942 15:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CrzWZyOINc 00:20:36.200 15:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:36.459 [2024-05-15 15:58:34.800905] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.459 nvme0n1 00:20:36.459 15:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:36.459 Running I/O for 1 seconds... 
00:20:37.837 00:20:37.837 Latency(us) 00:20:37.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.837 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:37.837 Verification LBA range: start 0x0 length 0x2000 00:20:37.837 nvme0n1 : 1.05 1497.48 5.85 0.00 0.00 83721.80 5478.81 114085.07 00:20:37.837 =================================================================================================================== 00:20:37.837 Total : 1497.48 5.85 0.00 0.00 83721.80 5478.81 114085.07 00:20:37.837 0 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3793526 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3793526 ']' 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3793526 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3793526 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3793526' 00:20:37.837 killing process with pid 3793526 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3793526 00:20:37.837 Received shutdown signal, test time was about 1.000000 seconds 00:20:37.837 00:20:37.837 Latency(us) 00:20:37.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.837 =================================================================================================================== 00:20:37.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3793526 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3793131 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3793131 ']' 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3793131 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.837 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3793131 00:20:37.838 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:37.838 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:37.838 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3793131' 00:20:37.838 killing process with pid 3793131 00:20:37.838 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3793131 00:20:37.838 [2024-05-15 15:58:36.338495] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:37.838 [2024-05-15 15:58:36.338540] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:37.838 15:58:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 3793131 00:20:38.100 15:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:38.100 15:58:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.100 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:38.100 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.100 15:58:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3793979 00:20:38.100 15:58:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3793979 00:20:38.100 15:58:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:38.101 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3793979 ']' 00:20:38.101 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.101 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:38.101 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.101 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:38.101 15:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.101 [2024-05-15 15:58:36.603255] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:38.101 [2024-05-15 15:58:36.603306] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.101 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.360 [2024-05-15 15:58:36.678098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.360 [2024-05-15 15:58:36.748454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.360 [2024-05-15 15:58:36.748496] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.360 [2024-05-15 15:58:36.748505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.360 [2024-05-15 15:58:36.748513] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.360 [2024-05-15 15:58:36.748520] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.360 [2024-05-15 15:58:36.748540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.928 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.928 [2024-05-15 15:58:37.441815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.928 malloc0 00:20:38.928 [2024-05-15 15:58:37.470121] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:38.928 [2024-05-15 15:58:37.470172] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.928 [2024-05-15 15:58:37.470381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3794253 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3794253 /var/tmp/bdevperf.sock 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3794253 ']' 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:39.187 15:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.187 [2024-05-15 15:58:37.543248] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:20:39.187 [2024-05-15 15:58:37.543294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794253 ] 00:20:39.187 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.187 [2024-05-15 15:58:37.611287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.187 [2024-05-15 15:58:37.679952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.125 15:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:40.125 15:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:40.125 15:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CrzWZyOINc 00:20:40.125 15:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:40.125 [2024-05-15 15:58:38.643183] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.385 nvme0n1 00:20:40.385 15:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.385 Running I/O for 1 seconds... 00:20:41.765 00:20:41.765 Latency(us) 00:20:41.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.765 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.765 Verification LBA range: start 0x0 length 0x2000 00:20:41.765 nvme0n1 : 1.07 1356.46 5.30 0.00 0.00 92080.01 5452.60 129184.56 00:20:41.765 =================================================================================================================== 00:20:41.765 Total : 1356.46 5.30 0.00 0.00 92080.01 5452.60 129184.56 00:20:41.765 0 00:20:41.765 15:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:41.765 15:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.765 15:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.765 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.765 15:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:41.765 "subsystems": [ 00:20:41.765 { 00:20:41.765 "subsystem": "keyring", 00:20:41.765 "config": [ 00:20:41.765 { 00:20:41.765 "method": "keyring_file_add_key", 00:20:41.765 "params": { 00:20:41.765 "name": "key0", 00:20:41.765 "path": "/tmp/tmp.CrzWZyOINc" 00:20:41.765 } 00:20:41.765 } 00:20:41.765 ] 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "subsystem": "iobuf", 00:20:41.765 "config": [ 00:20:41.765 { 00:20:41.765 "method": "iobuf_set_options", 00:20:41.765 "params": { 00:20:41.765 "small_pool_count": 8192, 00:20:41.765 "large_pool_count": 1024, 00:20:41.765 "small_bufsize": 8192, 00:20:41.765 "large_bufsize": 135168 00:20:41.765 } 00:20:41.765 } 00:20:41.765 ] 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "subsystem": "sock", 00:20:41.765 "config": [ 00:20:41.765 { 00:20:41.765 "method": "sock_impl_set_options", 00:20:41.765 "params": { 00:20:41.765 "impl_name": "posix", 00:20:41.765 "recv_buf_size": 2097152, 
00:20:41.765 "send_buf_size": 2097152, 00:20:41.765 "enable_recv_pipe": true, 00:20:41.765 "enable_quickack": false, 00:20:41.765 "enable_placement_id": 0, 00:20:41.765 "enable_zerocopy_send_server": true, 00:20:41.765 "enable_zerocopy_send_client": false, 00:20:41.765 "zerocopy_threshold": 0, 00:20:41.765 "tls_version": 0, 00:20:41.765 "enable_ktls": false 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "sock_impl_set_options", 00:20:41.765 "params": { 00:20:41.765 "impl_name": "ssl", 00:20:41.765 "recv_buf_size": 4096, 00:20:41.765 "send_buf_size": 4096, 00:20:41.765 "enable_recv_pipe": true, 00:20:41.765 "enable_quickack": false, 00:20:41.765 "enable_placement_id": 0, 00:20:41.765 "enable_zerocopy_send_server": true, 00:20:41.765 "enable_zerocopy_send_client": false, 00:20:41.765 "zerocopy_threshold": 0, 00:20:41.765 "tls_version": 0, 00:20:41.765 "enable_ktls": false 00:20:41.765 } 00:20:41.765 } 00:20:41.765 ] 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "subsystem": "vmd", 00:20:41.765 "config": [] 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "subsystem": "accel", 00:20:41.765 "config": [ 00:20:41.765 { 00:20:41.765 "method": "accel_set_options", 00:20:41.765 "params": { 00:20:41.765 "small_cache_size": 128, 00:20:41.765 "large_cache_size": 16, 00:20:41.765 "task_count": 2048, 00:20:41.765 "sequence_count": 2048, 00:20:41.765 "buf_count": 2048 00:20:41.765 } 00:20:41.765 } 00:20:41.765 ] 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "subsystem": "bdev", 00:20:41.765 "config": [ 00:20:41.765 { 00:20:41.765 "method": "bdev_set_options", 00:20:41.765 "params": { 00:20:41.765 "bdev_io_pool_size": 65535, 00:20:41.765 "bdev_io_cache_size": 256, 00:20:41.765 "bdev_auto_examine": true, 00:20:41.765 "iobuf_small_cache_size": 128, 00:20:41.765 "iobuf_large_cache_size": 16 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "bdev_raid_set_options", 00:20:41.765 "params": { 00:20:41.765 "process_window_size_kb": 1024 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "bdev_iscsi_set_options", 00:20:41.765 "params": { 00:20:41.765 "timeout_sec": 30 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "bdev_nvme_set_options", 00:20:41.765 "params": { 00:20:41.765 "action_on_timeout": "none", 00:20:41.765 "timeout_us": 0, 00:20:41.765 "timeout_admin_us": 0, 00:20:41.765 "keep_alive_timeout_ms": 10000, 00:20:41.765 "arbitration_burst": 0, 00:20:41.765 "low_priority_weight": 0, 00:20:41.765 "medium_priority_weight": 0, 00:20:41.765 "high_priority_weight": 0, 00:20:41.765 "nvme_adminq_poll_period_us": 10000, 00:20:41.765 "nvme_ioq_poll_period_us": 0, 00:20:41.765 "io_queue_requests": 0, 00:20:41.765 "delay_cmd_submit": true, 00:20:41.765 "transport_retry_count": 4, 00:20:41.765 "bdev_retry_count": 3, 00:20:41.765 "transport_ack_timeout": 0, 00:20:41.765 "ctrlr_loss_timeout_sec": 0, 00:20:41.765 "reconnect_delay_sec": 0, 00:20:41.765 "fast_io_fail_timeout_sec": 0, 00:20:41.765 "disable_auto_failback": false, 00:20:41.765 "generate_uuids": false, 00:20:41.765 "transport_tos": 0, 00:20:41.765 "nvme_error_stat": false, 00:20:41.765 "rdma_srq_size": 0, 00:20:41.765 "io_path_stat": false, 00:20:41.765 "allow_accel_sequence": false, 00:20:41.765 "rdma_max_cq_size": 0, 00:20:41.765 "rdma_cm_event_timeout_ms": 0, 00:20:41.765 "dhchap_digests": [ 00:20:41.765 "sha256", 00:20:41.765 "sha384", 00:20:41.765 "sha512" 00:20:41.765 ], 00:20:41.765 "dhchap_dhgroups": [ 00:20:41.765 "null", 00:20:41.765 "ffdhe2048", 00:20:41.765 "ffdhe3072", 
00:20:41.765 "ffdhe4096", 00:20:41.765 "ffdhe6144", 00:20:41.765 "ffdhe8192" 00:20:41.765 ] 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "bdev_nvme_set_hotplug", 00:20:41.765 "params": { 00:20:41.765 "period_us": 100000, 00:20:41.765 "enable": false 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "bdev_malloc_create", 00:20:41.765 "params": { 00:20:41.765 "name": "malloc0", 00:20:41.765 "num_blocks": 8192, 00:20:41.765 "block_size": 4096, 00:20:41.765 "physical_block_size": 4096, 00:20:41.765 "uuid": "002df527-c1bd-4036-b41c-4cf1c801bfef", 00:20:41.765 "optimal_io_boundary": 0 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "bdev_wait_for_examine" 00:20:41.765 } 00:20:41.765 ] 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "subsystem": "nbd", 00:20:41.765 "config": [] 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "subsystem": "scheduler", 00:20:41.765 "config": [ 00:20:41.765 { 00:20:41.765 "method": "framework_set_scheduler", 00:20:41.765 "params": { 00:20:41.765 "name": "static" 00:20:41.765 } 00:20:41.765 } 00:20:41.765 ] 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "subsystem": "nvmf", 00:20:41.765 "config": [ 00:20:41.765 { 00:20:41.765 "method": "nvmf_set_config", 00:20:41.765 "params": { 00:20:41.765 "discovery_filter": "match_any", 00:20:41.765 "admin_cmd_passthru": { 00:20:41.765 "identify_ctrlr": false 00:20:41.765 } 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "nvmf_set_max_subsystems", 00:20:41.765 "params": { 00:20:41.765 "max_subsystems": 1024 00:20:41.765 } 00:20:41.765 }, 00:20:41.765 { 00:20:41.765 "method": "nvmf_set_crdt", 00:20:41.765 "params": { 00:20:41.765 "crdt1": 0, 00:20:41.765 "crdt2": 0, 00:20:41.766 "crdt3": 0 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "nvmf_create_transport", 00:20:41.766 "params": { 00:20:41.766 "trtype": "TCP", 00:20:41.766 "max_queue_depth": 128, 00:20:41.766 "max_io_qpairs_per_ctrlr": 127, 00:20:41.766 "in_capsule_data_size": 4096, 00:20:41.766 "max_io_size": 131072, 00:20:41.766 "io_unit_size": 131072, 00:20:41.766 "max_aq_depth": 128, 00:20:41.766 "num_shared_buffers": 511, 00:20:41.766 "buf_cache_size": 4294967295, 00:20:41.766 "dif_insert_or_strip": false, 00:20:41.766 "zcopy": false, 00:20:41.766 "c2h_success": false, 00:20:41.766 "sock_priority": 0, 00:20:41.766 "abort_timeout_sec": 1, 00:20:41.766 "ack_timeout": 0, 00:20:41.766 "data_wr_pool_size": 0 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "nvmf_create_subsystem", 00:20:41.766 "params": { 00:20:41.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.766 "allow_any_host": false, 00:20:41.766 "serial_number": "00000000000000000000", 00:20:41.766 "model_number": "SPDK bdev Controller", 00:20:41.766 "max_namespaces": 32, 00:20:41.766 "min_cntlid": 1, 00:20:41.766 "max_cntlid": 65519, 00:20:41.766 "ana_reporting": false 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "nvmf_subsystem_add_host", 00:20:41.766 "params": { 00:20:41.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.766 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.766 "psk": "key0" 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "nvmf_subsystem_add_ns", 00:20:41.766 "params": { 00:20:41.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.766 "namespace": { 00:20:41.766 "nsid": 1, 00:20:41.766 "bdev_name": "malloc0", 00:20:41.766 "nguid": "002DF527C1BD4036B41C4CF1C801BFEF", 00:20:41.766 "uuid": "002df527-c1bd-4036-b41c-4cf1c801bfef", 00:20:41.766 
"no_auto_visible": false 00:20:41.766 } 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "nvmf_subsystem_add_listener", 00:20:41.766 "params": { 00:20:41.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.766 "listen_address": { 00:20:41.766 "trtype": "TCP", 00:20:41.766 "adrfam": "IPv4", 00:20:41.766 "traddr": "10.0.0.2", 00:20:41.766 "trsvcid": "4420" 00:20:41.766 }, 00:20:41.766 "secure_channel": true 00:20:41.766 } 00:20:41.766 } 00:20:41.766 ] 00:20:41.766 } 00:20:41.766 ] 00:20:41.766 }' 00:20:41.766 15:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:41.766 15:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:41.766 "subsystems": [ 00:20:41.766 { 00:20:41.766 "subsystem": "keyring", 00:20:41.766 "config": [ 00:20:41.766 { 00:20:41.766 "method": "keyring_file_add_key", 00:20:41.766 "params": { 00:20:41.766 "name": "key0", 00:20:41.766 "path": "/tmp/tmp.CrzWZyOINc" 00:20:41.766 } 00:20:41.766 } 00:20:41.766 ] 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "subsystem": "iobuf", 00:20:41.766 "config": [ 00:20:41.766 { 00:20:41.766 "method": "iobuf_set_options", 00:20:41.766 "params": { 00:20:41.766 "small_pool_count": 8192, 00:20:41.766 "large_pool_count": 1024, 00:20:41.766 "small_bufsize": 8192, 00:20:41.766 "large_bufsize": 135168 00:20:41.766 } 00:20:41.766 } 00:20:41.766 ] 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "subsystem": "sock", 00:20:41.766 "config": [ 00:20:41.766 { 00:20:41.766 "method": "sock_impl_set_options", 00:20:41.766 "params": { 00:20:41.766 "impl_name": "posix", 00:20:41.766 "recv_buf_size": 2097152, 00:20:41.766 "send_buf_size": 2097152, 00:20:41.766 "enable_recv_pipe": true, 00:20:41.766 "enable_quickack": false, 00:20:41.766 "enable_placement_id": 0, 00:20:41.766 "enable_zerocopy_send_server": true, 00:20:41.766 "enable_zerocopy_send_client": false, 00:20:41.766 "zerocopy_threshold": 0, 00:20:41.766 "tls_version": 0, 00:20:41.766 "enable_ktls": false 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "sock_impl_set_options", 00:20:41.766 "params": { 00:20:41.766 "impl_name": "ssl", 00:20:41.766 "recv_buf_size": 4096, 00:20:41.766 "send_buf_size": 4096, 00:20:41.766 "enable_recv_pipe": true, 00:20:41.766 "enable_quickack": false, 00:20:41.766 "enable_placement_id": 0, 00:20:41.766 "enable_zerocopy_send_server": true, 00:20:41.766 "enable_zerocopy_send_client": false, 00:20:41.766 "zerocopy_threshold": 0, 00:20:41.766 "tls_version": 0, 00:20:41.766 "enable_ktls": false 00:20:41.766 } 00:20:41.766 } 00:20:41.766 ] 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "subsystem": "vmd", 00:20:41.766 "config": [] 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "subsystem": "accel", 00:20:41.766 "config": [ 00:20:41.766 { 00:20:41.766 "method": "accel_set_options", 00:20:41.766 "params": { 00:20:41.766 "small_cache_size": 128, 00:20:41.766 "large_cache_size": 16, 00:20:41.766 "task_count": 2048, 00:20:41.766 "sequence_count": 2048, 00:20:41.766 "buf_count": 2048 00:20:41.766 } 00:20:41.766 } 00:20:41.766 ] 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "subsystem": "bdev", 00:20:41.766 "config": [ 00:20:41.766 { 00:20:41.766 "method": "bdev_set_options", 00:20:41.766 "params": { 00:20:41.766 "bdev_io_pool_size": 65535, 00:20:41.766 "bdev_io_cache_size": 256, 00:20:41.766 "bdev_auto_examine": true, 00:20:41.766 "iobuf_small_cache_size": 128, 00:20:41.766 "iobuf_large_cache_size": 16 00:20:41.766 } 00:20:41.766 }, 
00:20:41.766 { 00:20:41.766 "method": "bdev_raid_set_options", 00:20:41.766 "params": { 00:20:41.766 "process_window_size_kb": 1024 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "bdev_iscsi_set_options", 00:20:41.766 "params": { 00:20:41.766 "timeout_sec": 30 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "bdev_nvme_set_options", 00:20:41.766 "params": { 00:20:41.766 "action_on_timeout": "none", 00:20:41.766 "timeout_us": 0, 00:20:41.766 "timeout_admin_us": 0, 00:20:41.766 "keep_alive_timeout_ms": 10000, 00:20:41.766 "arbitration_burst": 0, 00:20:41.766 "low_priority_weight": 0, 00:20:41.766 "medium_priority_weight": 0, 00:20:41.766 "high_priority_weight": 0, 00:20:41.766 "nvme_adminq_poll_period_us": 10000, 00:20:41.766 "nvme_ioq_poll_period_us": 0, 00:20:41.766 "io_queue_requests": 512, 00:20:41.766 "delay_cmd_submit": true, 00:20:41.766 "transport_retry_count": 4, 00:20:41.766 "bdev_retry_count": 3, 00:20:41.766 "transport_ack_timeout": 0, 00:20:41.766 "ctrlr_loss_timeout_sec": 0, 00:20:41.766 "reconnect_delay_sec": 0, 00:20:41.766 "fast_io_fail_timeout_sec": 0, 00:20:41.766 "disable_auto_failback": false, 00:20:41.766 "generate_uuids": false, 00:20:41.766 "transport_tos": 0, 00:20:41.766 "nvme_error_stat": false, 00:20:41.766 "rdma_srq_size": 0, 00:20:41.766 "io_path_stat": false, 00:20:41.766 "allow_accel_sequence": false, 00:20:41.766 "rdma_max_cq_size": 0, 00:20:41.766 "rdma_cm_event_timeout_ms": 0, 00:20:41.766 "dhchap_digests": [ 00:20:41.766 "sha256", 00:20:41.766 "sha384", 00:20:41.766 "sha512" 00:20:41.766 ], 00:20:41.766 "dhchap_dhgroups": [ 00:20:41.766 "null", 00:20:41.766 "ffdhe2048", 00:20:41.766 "ffdhe3072", 00:20:41.766 "ffdhe4096", 00:20:41.766 "ffdhe6144", 00:20:41.766 "ffdhe8192" 00:20:41.766 ] 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "bdev_nvme_attach_controller", 00:20:41.766 "params": { 00:20:41.766 "name": "nvme0", 00:20:41.766 "trtype": "TCP", 00:20:41.766 "adrfam": "IPv4", 00:20:41.766 "traddr": "10.0.0.2", 00:20:41.766 "trsvcid": "4420", 00:20:41.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.766 "prchk_reftag": false, 00:20:41.766 "prchk_guard": false, 00:20:41.766 "ctrlr_loss_timeout_sec": 0, 00:20:41.766 "reconnect_delay_sec": 0, 00:20:41.766 "fast_io_fail_timeout_sec": 0, 00:20:41.766 "psk": "key0", 00:20:41.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.766 "hdgst": false, 00:20:41.766 "ddgst": false 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "bdev_nvme_set_hotplug", 00:20:41.766 "params": { 00:20:41.766 "period_us": 100000, 00:20:41.766 "enable": false 00:20:41.766 } 00:20:41.766 }, 00:20:41.766 { 00:20:41.766 "method": "bdev_enable_histogram", 00:20:41.767 "params": { 00:20:41.767 "name": "nvme0n1", 00:20:41.767 "enable": true 00:20:41.767 } 00:20:41.767 }, 00:20:41.767 { 00:20:41.767 "method": "bdev_wait_for_examine" 00:20:41.767 } 00:20:41.767 ] 00:20:41.767 }, 00:20:41.767 { 00:20:41.767 "subsystem": "nbd", 00:20:41.767 "config": [] 00:20:41.767 } 00:20:41.767 ] 00:20:41.767 }' 00:20:41.767 15:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3794253 00:20:41.767 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3794253 ']' 00:20:41.767 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3794253 00:20:41.767 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:41.767 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:41.767 
15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3794253 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3794253' 00:20:42.027 killing process with pid 3794253 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3794253 00:20:42.027 Received shutdown signal, test time was about 1.000000 seconds 00:20:42.027 00:20:42.027 Latency(us) 00:20:42.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.027 =================================================================================================================== 00:20:42.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3794253 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3793979 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3793979 ']' 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3793979 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:42.027 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3793979 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3793979' 00:20:42.287 killing process with pid 3793979 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3793979 00:20:42.287 [2024-05-15 15:58:40.616897] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3793979 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:42.287 15:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:42.287 "subsystems": [ 00:20:42.287 { 00:20:42.287 "subsystem": "keyring", 00:20:42.287 "config": [ 00:20:42.287 { 00:20:42.287 "method": "keyring_file_add_key", 00:20:42.287 "params": { 00:20:42.287 "name": "key0", 00:20:42.287 "path": "/tmp/tmp.CrzWZyOINc" 00:20:42.287 } 00:20:42.287 } 00:20:42.287 ] 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "subsystem": "iobuf", 00:20:42.287 "config": [ 00:20:42.287 { 00:20:42.287 "method": "iobuf_set_options", 00:20:42.287 "params": { 00:20:42.287 "small_pool_count": 8192, 00:20:42.287 "large_pool_count": 1024, 00:20:42.287 "small_bufsize": 8192, 00:20:42.287 "large_bufsize": 135168 00:20:42.287 } 00:20:42.287 } 00:20:42.287 ] 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "subsystem": "sock", 00:20:42.287 "config": [ 00:20:42.287 { 00:20:42.287 "method": 
"sock_impl_set_options", 00:20:42.287 "params": { 00:20:42.287 "impl_name": "posix", 00:20:42.287 "recv_buf_size": 2097152, 00:20:42.287 "send_buf_size": 2097152, 00:20:42.287 "enable_recv_pipe": true, 00:20:42.287 "enable_quickack": false, 00:20:42.287 "enable_placement_id": 0, 00:20:42.287 "enable_zerocopy_send_server": true, 00:20:42.287 "enable_zerocopy_send_client": false, 00:20:42.287 "zerocopy_threshold": 0, 00:20:42.287 "tls_version": 0, 00:20:42.287 "enable_ktls": false 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "sock_impl_set_options", 00:20:42.287 "params": { 00:20:42.287 "impl_name": "ssl", 00:20:42.287 "recv_buf_size": 4096, 00:20:42.287 "send_buf_size": 4096, 00:20:42.287 "enable_recv_pipe": true, 00:20:42.287 "enable_quickack": false, 00:20:42.287 "enable_placement_id": 0, 00:20:42.287 "enable_zerocopy_send_server": true, 00:20:42.287 "enable_zerocopy_send_client": false, 00:20:42.287 "zerocopy_threshold": 0, 00:20:42.287 "tls_version": 0, 00:20:42.287 "enable_ktls": false 00:20:42.287 } 00:20:42.287 } 00:20:42.287 ] 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "subsystem": "vmd", 00:20:42.287 "config": [] 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "subsystem": "accel", 00:20:42.287 "config": [ 00:20:42.287 { 00:20:42.287 "method": "accel_set_options", 00:20:42.287 "params": { 00:20:42.287 "small_cache_size": 128, 00:20:42.287 "large_cache_size": 16, 00:20:42.287 "task_count": 2048, 00:20:42.287 "sequence_count": 2048, 00:20:42.287 "buf_count": 2048 00:20:42.287 } 00:20:42.287 } 00:20:42.287 ] 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "subsystem": "bdev", 00:20:42.287 "config": [ 00:20:42.287 { 00:20:42.287 "method": "bdev_set_options", 00:20:42.287 "params": { 00:20:42.287 "bdev_io_pool_size": 65535, 00:20:42.287 "bdev_io_cache_size": 256, 00:20:42.287 "bdev_auto_examine": true, 00:20:42.287 "iobuf_small_cache_size": 128, 00:20:42.287 "iobuf_large_cache_size": 16 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "bdev_raid_set_options", 00:20:42.287 "params": { 00:20:42.287 "process_window_size_kb": 1024 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "bdev_iscsi_set_options", 00:20:42.287 "params": { 00:20:42.287 "timeout_sec": 30 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "bdev_nvme_set_options", 00:20:42.287 "params": { 00:20:42.287 "action_on_timeout": "none", 00:20:42.287 "timeout_us": 0, 00:20:42.287 "timeout_admin_us": 0, 00:20:42.287 "keep_alive_timeout_ms": 10000, 00:20:42.287 "arbitration_burst": 0, 00:20:42.287 "low_priority_weight": 0, 00:20:42.287 "medium_priority_weight": 0, 00:20:42.287 "high_priority_weight": 0, 00:20:42.287 "nvme_adminq_poll_period_us": 10000, 00:20:42.287 "nvme_ioq_poll_period_us": 0, 00:20:42.287 "io_queue_requests": 0, 00:20:42.287 "delay_cmd_submit": true, 00:20:42.287 "transport_retry_count": 4, 00:20:42.287 "bdev_retry_count": 3, 00:20:42.287 "transport_ack_timeout": 0, 00:20:42.287 "ctrlr_loss_timeout_sec": 0, 00:20:42.287 "reconnect_delay_sec": 0, 00:20:42.287 "fast_io_fail_timeout_sec": 0, 00:20:42.287 "disable_auto_failback": false, 00:20:42.287 "generate_uuids": false, 00:20:42.287 "transport_tos": 0, 00:20:42.287 "nvme_error_stat": false, 00:20:42.287 "rdma_srq_size": 0, 00:20:42.287 "io_path_stat": false, 00:20:42.287 "allow_accel_sequence": false, 00:20:42.287 "rdma_max_cq_size": 0, 00:20:42.287 "rdma_cm_event_timeout_ms": 0, 00:20:42.287 "dhchap_digests": [ 00:20:42.287 "sha256", 00:20:42.287 "sha384", 00:20:42.287 "sha512" 
00:20:42.287 ], 00:20:42.287 "dhchap_dhgroups": [ 00:20:42.287 "null", 00:20:42.287 "ffdhe2048", 00:20:42.287 "ffdhe3072", 00:20:42.287 "ffdhe4096", 00:20:42.287 "ffdhe6144", 00:20:42.287 "ffdhe8192" 00:20:42.287 ] 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "bdev_nvme_set_hotplug", 00:20:42.287 "params": { 00:20:42.287 "period_us": 100000, 00:20:42.287 "enable": false 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "bdev_malloc_create", 00:20:42.287 "params": { 00:20:42.287 "name": "malloc0", 00:20:42.287 "num_blocks": 8192, 00:20:42.287 "block_size": 4096, 00:20:42.287 "physical_block_size": 4096, 00:20:42.287 "uuid": "002df527-c1bd-4036-b41c-4cf1c801bfef", 00:20:42.287 "optimal_io_boundary": 0 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "bdev_wait_for_examine" 00:20:42.287 } 00:20:42.287 ] 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "subsystem": "nbd", 00:20:42.287 "config": [] 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "subsystem": "scheduler", 00:20:42.287 "config": [ 00:20:42.287 { 00:20:42.287 "method": "framework_set_scheduler", 00:20:42.287 "params": { 00:20:42.287 "name": "static" 00:20:42.287 } 00:20:42.287 } 00:20:42.287 ] 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "subsystem": "nvmf", 00:20:42.287 "config": [ 00:20:42.287 { 00:20:42.287 "method": "nvmf_set_config", 00:20:42.287 "params": { 00:20:42.287 "discovery_filter": "match_any", 00:20:42.287 "admin_cmd_passthru": { 00:20:42.287 "identify_ctrlr": false 00:20:42.287 } 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "nvmf_set_max_subsystems", 00:20:42.287 "params": { 00:20:42.287 "max_subsystems": 1024 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "nvmf_set_crdt", 00:20:42.287 "params": { 00:20:42.287 "crdt1": 0, 00:20:42.287 "crdt2": 0, 00:20:42.287 "crdt3": 0 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "nvmf_create_transport", 00:20:42.287 "params": { 00:20:42.287 "trtype": "TCP", 00:20:42.287 "max_queue_depth": 128, 00:20:42.287 "max_io_qpairs_per_ctrlr": 127, 00:20:42.287 "in_capsule_data_size": 4096, 00:20:42.287 "max_io_size": 131072, 00:20:42.287 "io_unit_size": 131072, 00:20:42.287 "max_aq_depth": 128, 00:20:42.287 "num_shared_buffers": 511, 00:20:42.287 "buf_cache_size": 4294967295, 00:20:42.287 "dif_insert_or_strip": false, 00:20:42.287 "zcopy": false, 00:20:42.287 "c2h_success": false, 00:20:42.287 "sock_priority": 0, 00:20:42.287 "abort_timeout_sec": 1, 00:20:42.287 "ack_timeout": 0, 00:20:42.287 "data_wr_pool_size": 0 00:20:42.287 } 00:20:42.287 }, 00:20:42.287 { 00:20:42.287 "method": "nvmf_create_subsystem", 00:20:42.287 "params": { 00:20:42.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.287 "allow_any_host": false, 00:20:42.287 "serial_number": "00000000000000000000", 00:20:42.287 "model_number": "SPDK bdev Controller", 00:20:42.287 "max_namespaces": 32, 00:20:42.287 "min_cntlid": 1, 00:20:42.287 "max_cntlid": 65519, 00:20:42.287 "ana_reporting": false 00:20:42.287 } 00:20:42.288 }, 00:20:42.288 { 00:20:42.288 "method": "nvmf_subsystem_add_host", 00:20:42.288 "params": { 00:20:42.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.288 "host": "nqn.2016-06.io.spdk:host1", 00:20:42.288 "psk": "key0" 00:20:42.288 } 00:20:42.288 }, 00:20:42.288 { 00:20:42.288 "method": "nvmf_subsystem_add_ns", 00:20:42.288 "params": { 00:20:42.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.288 "namespace": { 00:20:42.288 "nsid": 1, 00:20:42.288 "bdev_name": "malloc0", 00:20:42.288 
"nguid": "002DF527C1BD4036B41C4CF1C801BFEF", 00:20:42.288 "uuid": "002df527-c1bd-4036-b41c-4cf1c801bfef", 00:20:42.288 "no_auto_visible": false 00:20:42.288 } 00:20:42.288 } 00:20:42.288 }, 00:20:42.288 { 00:20:42.288 "method": "nvmf_subsystem_add_listener", 00:20:42.288 "params": { 00:20:42.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.288 "listen_address": { 00:20:42.288 "trtype": "TCP", 00:20:42.288 "adrfam": "IPv4", 00:20:42.288 "traddr": "10.0.0.2", 00:20:42.288 "trsvcid": "4420" 00:20:42.288 }, 00:20:42.288 "secure_channel": true 00:20:42.288 } 00:20:42.288 } 00:20:42.288 ] 00:20:42.288 } 00:20:42.288 ] 00:20:42.288 }' 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3794805 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3794805 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3794805 ']' 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:42.288 15:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.550 [2024-05-15 15:58:40.879696] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:42.550 [2024-05-15 15:58:40.879743] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.550 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.550 [2024-05-15 15:58:40.952399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.550 [2024-05-15 15:58:41.021751] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.550 [2024-05-15 15:58:41.021789] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.550 [2024-05-15 15:58:41.021799] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.550 [2024-05-15 15:58:41.021808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.550 [2024-05-15 15:58:41.021817] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.551 [2024-05-15 15:58:41.021881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.810 [2024-05-15 15:58:41.224121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.810 [2024-05-15 15:58:41.256136] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:42.810 [2024-05-15 15:58:41.256179] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.810 [2024-05-15 15:58:41.270554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3795040 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3795040 /var/tmp/bdevperf.sock 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3795040 ']' 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
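This phase re-creates the same TLS setup from saved configuration rather than from individual RPCs: the tgtcfg JSON captured earlier is fed to nvmf_tgt via -c /dev/fd/62, and the bperfcfg JSON echoed below is handed to bdevperf via -c /dev/fd/63. A hedged sketch of the same save-and-replay pattern, using ordinary files instead of the /dev/fd redirections and omitting the ip netns wrapper the test uses:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Capture the running target's configuration, then start a fresh target from it.
$SPDK_ROOT/scripts/rpc.py save_config > tgt.json
$SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt.json

# Same idea for the initiator: dump bdevperf's config and start a new instance from it.
$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json
$SPDK_ROOT/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperf.json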
00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:43.380 "subsystems": [ 00:20:43.380 { 00:20:43.380 "subsystem": "keyring", 00:20:43.380 "config": [ 00:20:43.380 { 00:20:43.380 "method": "keyring_file_add_key", 00:20:43.380 "params": { 00:20:43.380 "name": "key0", 00:20:43.380 "path": "/tmp/tmp.CrzWZyOINc" 00:20:43.380 } 00:20:43.380 } 00:20:43.380 ] 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "subsystem": "iobuf", 00:20:43.380 "config": [ 00:20:43.380 { 00:20:43.380 "method": "iobuf_set_options", 00:20:43.380 "params": { 00:20:43.380 "small_pool_count": 8192, 00:20:43.380 "large_pool_count": 1024, 00:20:43.380 "small_bufsize": 8192, 00:20:43.380 "large_bufsize": 135168 00:20:43.380 } 00:20:43.380 } 00:20:43.380 ] 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "subsystem": "sock", 00:20:43.380 "config": [ 00:20:43.380 { 00:20:43.380 "method": "sock_impl_set_options", 00:20:43.380 "params": { 00:20:43.380 "impl_name": "posix", 00:20:43.380 "recv_buf_size": 2097152, 00:20:43.380 "send_buf_size": 2097152, 00:20:43.380 "enable_recv_pipe": true, 00:20:43.380 "enable_quickack": false, 00:20:43.380 "enable_placement_id": 0, 00:20:43.380 "enable_zerocopy_send_server": true, 00:20:43.380 "enable_zerocopy_send_client": false, 00:20:43.380 "zerocopy_threshold": 0, 00:20:43.380 "tls_version": 0, 00:20:43.380 "enable_ktls": false 00:20:43.380 } 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "method": "sock_impl_set_options", 00:20:43.380 "params": { 00:20:43.380 "impl_name": "ssl", 00:20:43.380 "recv_buf_size": 4096, 00:20:43.380 "send_buf_size": 4096, 00:20:43.380 "enable_recv_pipe": true, 00:20:43.380 "enable_quickack": false, 00:20:43.380 "enable_placement_id": 0, 00:20:43.380 "enable_zerocopy_send_server": true, 00:20:43.380 "enable_zerocopy_send_client": false, 00:20:43.380 "zerocopy_threshold": 0, 00:20:43.380 "tls_version": 0, 00:20:43.380 "enable_ktls": false 00:20:43.380 } 00:20:43.380 } 00:20:43.380 ] 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "subsystem": "vmd", 00:20:43.380 "config": [] 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "subsystem": "accel", 00:20:43.380 "config": [ 00:20:43.380 { 00:20:43.380 "method": "accel_set_options", 00:20:43.380 "params": { 00:20:43.380 "small_cache_size": 128, 00:20:43.380 "large_cache_size": 16, 00:20:43.380 "task_count": 2048, 00:20:43.380 "sequence_count": 2048, 00:20:43.380 "buf_count": 2048 00:20:43.380 } 00:20:43.380 } 00:20:43.380 ] 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "subsystem": "bdev", 00:20:43.380 "config": [ 00:20:43.380 { 00:20:43.380 "method": "bdev_set_options", 00:20:43.380 "params": { 00:20:43.380 "bdev_io_pool_size": 65535, 00:20:43.380 "bdev_io_cache_size": 256, 00:20:43.380 "bdev_auto_examine": true, 00:20:43.380 "iobuf_small_cache_size": 128, 00:20:43.380 "iobuf_large_cache_size": 16 00:20:43.380 } 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "method": "bdev_raid_set_options", 00:20:43.380 "params": { 00:20:43.380 "process_window_size_kb": 1024 00:20:43.380 } 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "method": "bdev_iscsi_set_options", 00:20:43.380 "params": { 00:20:43.380 "timeout_sec": 30 00:20:43.380 } 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "method": "bdev_nvme_set_options", 00:20:43.380 "params": { 00:20:43.380 "action_on_timeout": "none", 00:20:43.380 "timeout_us": 0, 00:20:43.380 "timeout_admin_us": 0, 00:20:43.380 "keep_alive_timeout_ms": 10000, 00:20:43.380 "arbitration_burst": 0, 00:20:43.380 
"low_priority_weight": 0, 00:20:43.380 "medium_priority_weight": 0, 00:20:43.380 "high_priority_weight": 0, 00:20:43.380 "nvme_adminq_poll_period_us": 10000, 00:20:43.380 "nvme_ioq_poll_period_us": 0, 00:20:43.380 "io_queue_requests": 512, 00:20:43.380 "delay_cmd_submit": true, 00:20:43.380 "transport_retry_count": 4, 00:20:43.380 "bdev_retry_count": 3, 00:20:43.380 "transport_ack_timeout": 0, 00:20:43.380 "ctrlr_loss_timeout_sec": 0, 00:20:43.380 "reconnect_delay_sec": 0, 00:20:43.380 "fast_io_fail_timeout_sec": 0, 00:20:43.380 "disable_auto_failback": false, 00:20:43.380 "generate_uuids": false, 00:20:43.380 "transport_tos": 0, 00:20:43.380 "nvme_error_stat": false, 00:20:43.380 "rdma_srq_size": 0, 00:20:43.380 "io_path_stat": false, 00:20:43.380 "allow_accel_sequence": false, 00:20:43.380 "rdma_max_cq_size": 0, 00:20:43.380 "rdma_cm_event_timeout_ms": 0, 00:20:43.380 "dhchap_digests": [ 00:20:43.380 "sha256", 00:20:43.380 "sha384", 00:20:43.380 "sha512" 00:20:43.380 ], 00:20:43.380 "dhchap_dhgroups": [ 00:20:43.380 "null", 00:20:43.380 "ffdhe2048", 00:20:43.380 "ffdhe3072", 00:20:43.380 15:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.380 "ffdhe4096", 00:20:43.380 "ffdhe6144", 00:20:43.380 "ffdhe8192" 00:20:43.380 ] 00:20:43.380 } 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "method": "bdev_nvme_attach_controller", 00:20:43.380 "params": { 00:20:43.380 "name": "nvme0", 00:20:43.380 "trtype": "TCP", 00:20:43.380 "adrfam": "IPv4", 00:20:43.380 "traddr": "10.0.0.2", 00:20:43.380 "trsvcid": "4420", 00:20:43.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.380 "prchk_reftag": false, 00:20:43.380 "prchk_guard": false, 00:20:43.380 "ctrlr_loss_timeout_sec": 0, 00:20:43.380 "reconnect_delay_sec": 0, 00:20:43.380 "fast_io_fail_timeout_sec": 0, 00:20:43.380 "psk": "key0", 00:20:43.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.380 "hdgst": false, 00:20:43.380 "ddgst": false 00:20:43.380 } 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "method": "bdev_nvme_set_hotplug", 00:20:43.380 "params": { 00:20:43.380 "period_us": 100000, 00:20:43.380 "enable": false 00:20:43.380 } 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "method": "bdev_enable_histogram", 00:20:43.380 "params": { 00:20:43.380 "name": "nvme0n1", 00:20:43.380 "enable": true 00:20:43.380 } 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "method": "bdev_wait_for_examine" 00:20:43.380 } 00:20:43.381 ] 00:20:43.381 }, 00:20:43.381 { 00:20:43.381 "subsystem": "nbd", 00:20:43.381 "config": [] 00:20:43.381 } 00:20:43.381 ] 00:20:43.381 }' 00:20:43.381 [2024-05-15 15:58:41.772030] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:20:43.381 [2024-05-15 15:58:41.772080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795040 ] 00:20:43.381 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.381 [2024-05-15 15:58:41.841477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.381 [2024-05-15 15:58:41.910489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.640 [2024-05-15 15:58:42.053411] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.209 15:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:44.209 15:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:44.209 15:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:44.209 15:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:44.209 15:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.209 15:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.469 Running I/O for 1 seconds... 00:20:45.412 00:20:45.412 Latency(us) 00:20:45.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.412 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:45.412 Verification LBA range: start 0x0 length 0x2000 00:20:45.412 nvme0n1 : 1.08 1486.17 5.81 0.00 0.00 83437.00 7287.60 109051.90 00:20:45.412 =================================================================================================================== 00:20:45.412 Total : 1486.17 5.81 0.00 0.00 83437.00 7287.60 109051.90 00:20:45.412 0 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:20:45.412 15:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:45.412 nvmf_trace.0 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3795040 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3795040 ']' 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3795040 
00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3795040 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3795040' 00:20:45.671 killing process with pid 3795040 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3795040 00:20:45.671 Received shutdown signal, test time was about 1.000000 seconds 00:20:45.671 00:20:45.671 Latency(us) 00:20:45.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.671 =================================================================================================================== 00:20:45.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.671 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3795040 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.931 rmmod nvme_tcp 00:20:45.931 rmmod nvme_fabrics 00:20:45.931 rmmod nvme_keyring 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3794805 ']' 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3794805 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3794805 ']' 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3794805 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3794805 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3794805' 00:20:45.931 killing process with pid 3794805 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3794805 00:20:45.931 [2024-05-15 15:58:44.370124] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:45.931 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 3794805 00:20:46.190 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.190 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.190 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.190 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.191 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.191 15:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.191 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.191 15:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.098 15:58:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:48.098 15:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UWj4JUVd22 /tmp/tmp.qu9Ee7Royd /tmp/tmp.CrzWZyOINc 00:20:48.098 00:20:48.098 real 1m26.621s 00:20:48.098 user 2m8.995s 00:20:48.098 sys 0m33.085s 00:20:48.396 15:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:48.396 15:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.396 ************************************ 00:20:48.396 END TEST nvmf_tls 00:20:48.396 ************************************ 00:20:48.396 15:58:46 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:48.396 15:58:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:48.396 15:58:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:48.396 15:58:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:48.396 ************************************ 00:20:48.396 START TEST nvmf_fips 00:20:48.396 ************************************ 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:48.396 * Looking for test storage... 
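nvmf_tls ends here (1m26.621s wall time) and fips.sh takes over; its first visible step in the entries below is check_openssl_version, which compares the output of openssl version against a 3.0.0 floor before any FIPS-mode testing. A standalone sketch of an equivalent gate, using sort -V instead of the cmp_versions helper the script itself sources (3.0.9 is the version reported on this build host):

target=3.0.0
version=$(openssl version | awk '{print $2}')       # 3.0.9 in this log
if printf '%s\n%s\n' "$target" "$version" | sort -V -C; then
    echo "OpenSSL $version >= $target, FIPS checks can proceed"
else
    echo "OpenSSL $version is older than $target" >&2
    exit 1
fi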
00:20:48.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:48.396 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.397 15:58:46 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:48.397 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:48.695 15:58:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:48.695 Error setting digest 00:20:48.695 00D2F8BBB27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:48.695 00D2F8BBB27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:48.695 15:58:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.277 
15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:55.277 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:55.277 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:55.277 Found net devices under 0000:af:00.0: cvl_0_0 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:55.277 Found net devices under 0000:af:00.1: cvl_0_1 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:20:55.277 00:20:55.277 --- 10.0.0.2 ping statistics --- 00:20:55.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.277 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:20:55.277 00:20:55.277 --- 10.0.0.1 ping statistics --- 00:20:55.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.277 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:20:55.277 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3799084 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3799084 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3799084 ']' 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.278 15:58:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.278 [2024-05-15 15:58:53.504447] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:55.278 [2024-05-15 15:58:53.504495] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.278 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.278 [2024-05-15 15:58:53.576285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.278 [2024-05-15 15:58:53.647891] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.278 [2024-05-15 15:58:53.647929] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:55.278 [2024-05-15 15:58:53.647937] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.278 [2024-05-15 15:58:53.647945] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.278 [2024-05-15 15:58:53.647952] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.278 [2024-05-15 15:58:53.647976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:55.848 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:56.108 [2024-05-15 15:58:54.474478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.108 [2024-05-15 15:58:54.490463] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:56.108 [2024-05-15 15:58:54.490503] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.108 [2024-05-15 15:58:54.490687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.108 [2024-05-15 15:58:54.518673] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:56.108 malloc0 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3799367 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3799367 /var/tmp/bdevperf.sock 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3799367 ']' 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:56.108 15:58:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.108 [2024-05-15 15:58:54.605426] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:56.109 [2024-05-15 15:58:54.605477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799367 ] 00:20:56.109 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.109 [2024-05-15 15:58:54.670459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.368 [2024-05-15 15:58:54.743065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.938 15:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:56.938 15:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:56.938 15:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:57.199 [2024-05-15 15:58:55.544839] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.199 [2024-05-15 15:58:55.544923] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:57.199 TLSTESTn1 00:20:57.199 15:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.199 Running I/O for 10 seconds... 
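For readers reconstructing the TLS/PSK flow traced above outside this harness, the commands reduce to roughly the following sequence. This is a minimal sketch assembled from the trace, not the harness scripts themselves; the interchange key, socket path, addresses, and NQNs are the ones this job happened to use (see fips.sh@136 and fips.sh@150 above), so treat them all as placeholders for your own environment:

  # Write the TLS PSK interchange key with restrictive permissions
  # (key value copied from the trace above; substitute your own).
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > key.txt
  chmod 0600 key.txt

  # From an SPDK checkout: start bdevperf with an out-of-band RPC socket
  # (-z defers the workload until perform_tests is issued over RPC).
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # Attach a TLS-protected NVMe/TCP controller using the PSK, then run I/O.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key.txt
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ten-second run whose latency table follows is this perform_tests call; the target side must already be listening on 10.0.0.2:4420 with the same PSK registered, as done by setup_nvmf_tgt_conf earlier in the trace.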
00:21:09.424 00:21:09.424 Latency(us) 00:21:09.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.424 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:09.424 Verification LBA range: start 0x0 length 0x2000 00:21:09.424 TLSTESTn1 : 10.08 1747.00 6.82 0.00 0.00 73039.83 7025.46 114923.93 00:21:09.424 =================================================================================================================== 00:21:09.424 Total : 1747.00 6.82 0.00 0.00 73039.83 7025.46 114923.93 00:21:09.424 0 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:09.424 nvmf_trace.0 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3799367 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3799367 ']' 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3799367 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3799367 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3799367' 00:21:09.424 killing process with pid 3799367 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3799367 00:21:09.424 Received shutdown signal, test time was about 10.000000 seconds 00:21:09.424 00:21:09.424 Latency(us) 00:21:09.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.424 =================================================================================================================== 00:21:09.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.424 [2024-05-15 15:59:05.983064] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:09.424 15:59:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3799367 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.424 rmmod nvme_tcp 00:21:09.424 rmmod nvme_fabrics 00:21:09.424 rmmod nvme_keyring 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3799084 ']' 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3799084 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3799084 ']' 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3799084 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3799084 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3799084' 00:21:09.424 killing process with pid 3799084 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3799084 00:21:09.424 [2024-05-15 15:59:06.299010] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:09.424 [2024-05-15 15:59:06.299045] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3799084 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.424 15:59:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.364 15:59:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.364 15:59:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:10.364 00:21:10.364 real 0m21.843s 00:21:10.364 user 0m22.257s 00:21:10.364 sys 0m10.420s 00:21:10.364 15:59:08 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:10.364 15:59:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:10.364 ************************************ 00:21:10.364 END TEST nvmf_fips 00:21:10.364 ************************************ 00:21:10.364 15:59:08 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:10.364 15:59:08 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:10.364 15:59:08 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:10.364 15:59:08 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:10.364 15:59:08 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.364 15:59:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.941 15:59:14 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:16.941 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.941 15:59:14 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:16.942 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:16.942 Found net devices under 0000:af:00.0: cvl_0_0 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:16.942 Found net devices under 0000:af:00.1: cvl_0_1 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:16.942 15:59:14 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
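Before perf_adq is launched, the prologue traced above (gather_supported_nvmf_pci_devs in nvmf/common.sh) classifies candidate NICs by PCI vendor/device ID, here Intel E810 functions (8086:159b), and maps each PCI function to its kernel net device through sysfs, producing the "Found net devices under 0000:af:00.x: cvl_0_x" lines. A rough standalone equivalent, under the assumption of a Linux host with lspci available (the harness's internal pci_bus_cache array is not reused here; this simply rescans):

  #!/usr/bin/env bash
  # List net devices backed by Intel E810 (8086:159b) PCI functions.
  # Sketch of the detection traced above, not the harness code itself.
  for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue   # function may have no bound netdev
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done

A function only shows an entry under .../net/ while its driver ("ice" for E810) is loaded and bound, which is why the test reloads the driver before probing; the harness then builds TCP_INTERFACE_LIST from those names and, as the (( 2 > 0 )) check above shows, requires at least one interface before the test may proceed.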
00:21:16.942 15:59:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:16.942 15:59:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:16.942 15:59:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:16.942 ************************************ 00:21:16.942 START TEST nvmf_perf_adq 00:21:16.942 ************************************ 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:16.942 * Looking for test storage... 00:21:16.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.942 15:59:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:23.544 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.544 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:23.545 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:23.545 Found net devices under 0000:af:00.0: cvl_0_0 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:23.545 Found net devices under 0000:af:00.1: cvl_0_1 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:23.545 15:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:23.805 15:59:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:26.343 15:59:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:31.621 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:31.621 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.621 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:31.622 Found net devices under 0000:af:00.0: cvl_0_0 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:31.622 Found net devices under 0000:af:00.1: cvl_0_1 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:31.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:21:31.622 00:21:31.622 --- 10.0.0.2 ping statistics --- 00:21:31.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.622 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:21:31.622 00:21:31.622 --- 10.0.0.1 ping statistics --- 00:21:31.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.622 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3809592 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3809592 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3809592 ']' 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
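Taken together, the nvmf_tcp_init trace above builds a small back-to-back topology: one E810 port stays in the root namespace as the initiator, the other moves into a namespace and serves as the target. Restated as a minimal sketch using the same names and addresses as the log:

  ip netns add cvl_0_0_ns_spdk                     # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                               # verify initiator -> target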
00:21:31.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:31.622 15:59:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.622 [2024-05-15 15:59:29.845580] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:31.622 [2024-05-15 15:59:29.845629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.622 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.622 [2024-05-15 15:59:29.921256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.622 [2024-05-15 15:59:29.992336] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.622 [2024-05-15 15:59:29.992379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.622 [2024-05-15 15:59:29.992389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.622 [2024-05-15 15:59:29.992397] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.622 [2024-05-15 15:59:29.992404] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.622 [2024-05-15 15:59:29.992496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.622 [2024-05-15 15:59:29.992591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.622 [2024-05-15 15:59:29.992674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.623 [2024-05-15 15:59:29.992676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.192 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.453 [2024-05-15 15:59:30.846496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.453 Malloc1 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.453 [2024-05-15 15:59:30.896820] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:32.453 [2024-05-15 15:59:30.897087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3809871 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:32.453 15:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:32.453 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.366 15:59:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:34.366 15:59:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.366 15:59:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.625 15:59:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.625 15:59:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:34.625 "tick_rate": 2500000000, 00:21:34.625 "poll_groups": [ 00:21:34.625 { 00:21:34.625 "name": "nvmf_tgt_poll_group_000", 00:21:34.625 "admin_qpairs": 1, 00:21:34.625 "io_qpairs": 1, 00:21:34.625 "current_admin_qpairs": 1, 00:21:34.625 "current_io_qpairs": 1, 00:21:34.625 "pending_bdev_io": 0, 00:21:34.625 "completed_nvme_io": 19212, 00:21:34.625 "transports": [ 00:21:34.625 { 00:21:34.625 "trtype": "TCP" 00:21:34.625 } 00:21:34.625 ] 00:21:34.625 }, 00:21:34.625 { 00:21:34.625 "name": "nvmf_tgt_poll_group_001", 00:21:34.625 "admin_qpairs": 0, 00:21:34.625 "io_qpairs": 1, 00:21:34.625 "current_admin_qpairs": 0, 00:21:34.625 "current_io_qpairs": 1, 00:21:34.625 "pending_bdev_io": 0, 00:21:34.625 "completed_nvme_io": 19191, 00:21:34.625 "transports": [ 00:21:34.625 { 00:21:34.625 "trtype": "TCP" 00:21:34.625 } 00:21:34.625 ] 00:21:34.625 }, 00:21:34.625 { 00:21:34.625 "name": "nvmf_tgt_poll_group_002", 00:21:34.625 "admin_qpairs": 0, 00:21:34.625 "io_qpairs": 1, 00:21:34.625 "current_admin_qpairs": 0, 00:21:34.625 "current_io_qpairs": 1, 00:21:34.625 "pending_bdev_io": 0, 00:21:34.625 "completed_nvme_io": 17643, 00:21:34.625 "transports": [ 00:21:34.625 { 00:21:34.625 "trtype": "TCP" 00:21:34.625 } 00:21:34.625 ] 00:21:34.625 }, 00:21:34.625 { 00:21:34.625 "name": "nvmf_tgt_poll_group_003", 00:21:34.625 "admin_qpairs": 0, 00:21:34.625 "io_qpairs": 1, 00:21:34.625 "current_admin_qpairs": 0, 00:21:34.625 "current_io_qpairs": 1, 00:21:34.625 "pending_bdev_io": 0, 00:21:34.625 "completed_nvme_io": 19235, 00:21:34.625 "transports": [ 00:21:34.625 { 00:21:34.625 "trtype": "TCP" 00:21:34.625 } 00:21:34.625 ] 00:21:34.625 } 00:21:34.625 ] 00:21:34.625 }' 00:21:34.626 15:59:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:34.626 15:59:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:34.626 15:59:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:34.626 15:59:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:34.626 15:59:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3809871 00:21:42.750 Initializing NVMe Controllers 00:21:42.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:42.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:42.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:42.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:42.750 Initialization complete. Launching workers. 
00:21:42.751 ======================================================== 00:21:42.751 Latency(us) 00:21:42.751 Device Information : IOPS MiB/s Average min max 00:21:42.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9690.30 37.85 6625.90 1535.42 48053.54 00:21:42.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9863.90 38.53 6488.06 1699.05 11082.28 00:21:42.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9840.30 38.44 6504.05 1740.79 11270.54 00:21:42.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9836.40 38.42 6506.74 1724.61 10830.34 00:21:42.751 ======================================================== 00:21:42.751 Total : 39230.90 153.25 6530.80 1535.42 48053.54 00:21:42.751 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.751 rmmod nvme_tcp 00:21:42.751 rmmod nvme_fabrics 00:21:42.751 rmmod nvme_keyring 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3809592 ']' 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3809592 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3809592 ']' 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3809592 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3809592 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3809592' 00:21:42.751 killing process with pid 3809592 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3809592 00:21:42.751 [2024-05-15 15:59:41.189313] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:42.751 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3809592 00:21:43.010 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.010 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.010 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.010 15:59:41 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.010 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.010 15:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.010 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.010 15:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.605 15:59:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.605 15:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:45.605 15:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:46.543 15:59:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:48.449 15:59:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:53.729 
15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:53.729 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:53.729 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:53.729 Found net devices under 0000:af:00.0: cvl_0_0 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:53.729 Found net devices under 0000:af:00.1: cvl_0_1 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.729 15:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.729 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.729 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.729 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:53.729 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.729 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.729 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:53.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:53.730 00:21:53.730 --- 10.0.0.2 ping statistics --- 00:21:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.730 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:21:53.730 00:21:53.730 --- 10.0.0.1 ping statistics --- 00:21:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.730 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:53.730 net.core.busy_poll = 1 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:53.730 net.core.busy_read = 1 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:53.730 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3813918 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3813918 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3813918 ']' 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:53.989 15:59:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.248 [2024-05-15 15:59:52.583882] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:54.248 [2024-05-15 15:59:52.583950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.248 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.248 [2024-05-15 15:59:52.658602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.248 [2024-05-15 15:59:52.734135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.248 [2024-05-15 15:59:52.734172] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.248 [2024-05-15 15:59:52.734183] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.248 [2024-05-15 15:59:52.734211] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.248 [2024-05-15 15:59:52.734219] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
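The adq_configure_driver block above is the heart of the ADQ setup: it steers NVMe/TCP traffic (TCP dst port 4420) into a dedicated hardware traffic class on the E810 port and enables socket busy-polling. Condensed from the trace, assuming the same port and address; in the run above these execute via ip netns exec cvl_0_0_ns_spdk because cvl_0_0 lives in the target namespace:

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                 # poll sockets instead of sleeping
  sysctl -w net.core.busy_read=1
  # TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ class), offloaded to hw:
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The second perf pass below then starts the target with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport --sock-priority 1, so SPDK's poll groups align with the filtered queue set.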
00:21:54.248 [2024-05-15 15:59:52.734264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.248 [2024-05-15 15:59:52.734282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.248 [2024-05-15 15:59:52.734388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.248 [2024-05-15 15:59:52.734390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.186 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.187 [2024-05-15 15:59:53.566864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.187 Malloc1 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.187 15:59:53 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.187 [2024-05-15 15:59:53.617422] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:55.187 [2024-05-15 15:59:53.617698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3813995 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:55.187 15:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:55.187 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.094 15:59:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:57.094 15:59:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.094 15:59:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.094 15:59:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.094 15:59:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:57.094 "tick_rate": 2500000000, 00:21:57.094 "poll_groups": [ 00:21:57.094 { 00:21:57.094 "name": "nvmf_tgt_poll_group_000", 00:21:57.094 "admin_qpairs": 1, 00:21:57.094 "io_qpairs": 2, 00:21:57.094 "current_admin_qpairs": 1, 00:21:57.094 "current_io_qpairs": 2, 00:21:57.094 "pending_bdev_io": 0, 00:21:57.094 "completed_nvme_io": 28093, 00:21:57.094 "transports": [ 00:21:57.094 { 00:21:57.094 "trtype": "TCP" 00:21:57.094 } 00:21:57.094 ] 00:21:57.094 }, 00:21:57.094 { 00:21:57.094 "name": "nvmf_tgt_poll_group_001", 00:21:57.094 "admin_qpairs": 0, 00:21:57.094 "io_qpairs": 2, 00:21:57.094 "current_admin_qpairs": 0, 00:21:57.094 "current_io_qpairs": 2, 00:21:57.094 "pending_bdev_io": 0, 00:21:57.094 "completed_nvme_io": 28305, 00:21:57.094 "transports": [ 00:21:57.094 { 00:21:57.094 "trtype": "TCP" 00:21:57.094 } 00:21:57.094 ] 00:21:57.094 }, 00:21:57.094 { 00:21:57.094 "name": 
"nvmf_tgt_poll_group_002", 00:21:57.094 "admin_qpairs": 0, 00:21:57.094 "io_qpairs": 0, 00:21:57.094 "current_admin_qpairs": 0, 00:21:57.094 "current_io_qpairs": 0, 00:21:57.094 "pending_bdev_io": 0, 00:21:57.094 "completed_nvme_io": 0, 00:21:57.094 "transports": [ 00:21:57.094 { 00:21:57.094 "trtype": "TCP" 00:21:57.094 } 00:21:57.094 ] 00:21:57.094 }, 00:21:57.094 { 00:21:57.094 "name": "nvmf_tgt_poll_group_003", 00:21:57.094 "admin_qpairs": 0, 00:21:57.094 "io_qpairs": 0, 00:21:57.094 "current_admin_qpairs": 0, 00:21:57.094 "current_io_qpairs": 0, 00:21:57.094 "pending_bdev_io": 0, 00:21:57.094 "completed_nvme_io": 0, 00:21:57.094 "transports": [ 00:21:57.094 { 00:21:57.094 "trtype": "TCP" 00:21:57.094 } 00:21:57.094 ] 00:21:57.094 } 00:21:57.094 ] 00:21:57.094 }' 00:21:57.353 15:59:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:57.353 15:59:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:57.353 15:59:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:57.353 15:59:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:57.353 15:59:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3813995 00:22:05.477 Initializing NVMe Controllers 00:22:05.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:05.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:05.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:05.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:05.477 Initialization complete. Launching workers. 
00:22:05.477 ======================================================== 00:22:05.477 Latency(us) 00:22:05.477 Device Information : IOPS MiB/s Average min max 00:22:05.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7342.45 28.68 8719.74 1786.33 54309.48 00:22:05.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7070.65 27.62 9053.54 1808.49 54672.27 00:22:05.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7612.75 29.74 8407.30 1468.77 53834.65 00:22:05.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6965.85 27.21 9187.11 1804.39 54555.19 00:22:05.477 ======================================================== 00:22:05.478 Total : 28991.69 113.25 8831.40 1468.77 54672.27 00:22:05.478 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:05.478 rmmod nvme_tcp 00:22:05.478 rmmod nvme_fabrics 00:22:05.478 rmmod nvme_keyring 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3813918 ']' 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3813918 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3813918 ']' 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3813918 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3813918 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3813918' 00:22:05.478 killing process with pid 3813918 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3813918 00:22:05.478 [2024-05-15 16:00:03.865600] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:05.478 16:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3813918 00:22:05.737 16:00:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:05.737 16:00:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:05.737 16:00:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:05.737 16:00:04 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.737 16:00:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:05.737 16:00:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.737 16:00:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.737 16:00:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.030 16:00:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:09.030 16:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:09.030 00:22:09.030 real 0m52.713s 00:22:09.030 user 2m46.175s 00:22:09.030 sys 0m14.007s 00:22:09.030 16:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:09.030 16:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.030 ************************************ 00:22:09.030 END TEST nvmf_perf_adq 00:22:09.030 ************************************ 00:22:09.030 16:00:07 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:09.030 16:00:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:09.030 16:00:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:09.030 16:00:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:09.030 ************************************ 00:22:09.030 START TEST nvmf_shutdown 00:22:09.030 ************************************ 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:09.030 * Looking for test storage... 
00:22:09.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:09.030 ************************************ 00:22:09.030 START TEST nvmf_shutdown_tc1 00:22:09.030 ************************************ 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:22:09.030 16:00:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.030 16:00:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.708 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:15.709 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:15.709 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.709 16:00:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:15.709 Found net devices under 0000:af:00.0: cvl_0_0 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:15.709 Found net devices under 0000:af:00.1: cvl_0_1 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.709 16:00:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.709 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.709 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.709 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:15.709 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.709 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.709 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.709 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:15.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:15.967 00:22:15.967 --- 10.0.0.2 ping statistics --- 00:22:15.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.967 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:22:15.967 00:22:15.967 --- 10.0.0.1 ping statistics --- 00:22:15.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.967 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3820222 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3820222 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3820222 ']' 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:15.967 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.968 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:15.968 16:00:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.968 [2024-05-15 16:00:14.378050] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:22:15.968 [2024-05-15 16:00:14.378095] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.968 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.968 [2024-05-15 16:00:14.451466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.968 [2024-05-15 16:00:14.525857] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.968 [2024-05-15 16:00:14.525895] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.968 [2024-05-15 16:00:14.525908] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.968 [2024-05-15 16:00:14.525916] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.968 [2024-05-15 16:00:14.525923] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.968 [2024-05-15 16:00:14.526024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.968 [2024-05-15 16:00:14.526111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.968 [2024-05-15 16:00:14.526145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.968 [2024-05-15 16:00:14.526147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:16.914 [2024-05-15 16:00:15.234957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.914 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.915 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:16.915 Malloc1 00:22:16.915 [2024-05-15 16:00:15.349742] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:16.915 [2024-05-15 16:00:15.349997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.915 Malloc2 00:22:16.915 Malloc3 00:22:16.915 Malloc4 00:22:17.173 Malloc5 00:22:17.173 Malloc6 00:22:17.173 Malloc7 00:22:17.173 Malloc8 00:22:17.173 Malloc9 00:22:17.173 Malloc10 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.433 16:00:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3820528 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3820528 /var/tmp/bdevperf.sock 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3820528 ']' 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.433 { 00:22:17.433 "params": { 00:22:17.433 "name": "Nvme$subsystem", 00:22:17.433 "trtype": "$TEST_TRANSPORT", 00:22:17.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.433 "adrfam": "ipv4", 00:22:17.433 "trsvcid": "$NVMF_PORT", 00:22:17.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.433 "hdgst": ${hdgst:-false}, 00:22:17.433 "ddgst": ${ddgst:-false} 00:22:17.433 }, 00:22:17.433 "method": "bdev_nvme_attach_controller" 00:22:17.433 } 00:22:17.433 EOF 00:22:17.433 )") 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.433 { 00:22:17.433 "params": { 00:22:17.433 "name": "Nvme$subsystem", 00:22:17.433 "trtype": "$TEST_TRANSPORT", 00:22:17.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.433 "adrfam": "ipv4", 00:22:17.433 "trsvcid": "$NVMF_PORT", 00:22:17.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.433 "hdgst": ${hdgst:-false}, 00:22:17.433 "ddgst": ${ddgst:-false} 00:22:17.433 }, 00:22:17.433 "method": "bdev_nvme_attach_controller" 00:22:17.433 } 00:22:17.433 EOF 00:22:17.433 )") 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:22:17.433 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.433 { 00:22:17.433 "params": { 00:22:17.433 "name": "Nvme$subsystem", 00:22:17.433 "trtype": "$TEST_TRANSPORT", 00:22:17.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.433 "adrfam": "ipv4", 00:22:17.433 "trsvcid": "$NVMF_PORT", 00:22:17.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.433 "hdgst": ${hdgst:-false}, 00:22:17.433 "ddgst": ${ddgst:-false} 00:22:17.433 }, 00:22:17.433 "method": "bdev_nvme_attach_controller" 00:22:17.433 } 00:22:17.433 EOF 00:22:17.433 )") 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.434 { 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme$subsystem", 00:22:17.434 "trtype": "$TEST_TRANSPORT", 00:22:17.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "$NVMF_PORT", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.434 "hdgst": ${hdgst:-false}, 00:22:17.434 "ddgst": ${ddgst:-false} 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 } 00:22:17.434 EOF 00:22:17.434 )") 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.434 { 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme$subsystem", 00:22:17.434 "trtype": "$TEST_TRANSPORT", 00:22:17.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "$NVMF_PORT", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.434 "hdgst": ${hdgst:-false}, 00:22:17.434 "ddgst": ${ddgst:-false} 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 } 00:22:17.434 EOF 00:22:17.434 )") 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.434 { 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme$subsystem", 00:22:17.434 "trtype": "$TEST_TRANSPORT", 00:22:17.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "$NVMF_PORT", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.434 "hdgst": ${hdgst:-false}, 00:22:17.434 "ddgst": ${ddgst:-false} 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 } 00:22:17.434 EOF 00:22:17.434 )") 00:22:17.434 [2024-05-15 16:00:15.830842] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:22:17.434 [2024-05-15 16:00:15.830895] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.434 { 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme$subsystem", 00:22:17.434 "trtype": "$TEST_TRANSPORT", 00:22:17.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "$NVMF_PORT", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.434 "hdgst": ${hdgst:-false}, 00:22:17.434 "ddgst": ${ddgst:-false} 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 } 00:22:17.434 EOF 00:22:17.434 )") 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.434 { 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme$subsystem", 00:22:17.434 "trtype": "$TEST_TRANSPORT", 00:22:17.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "$NVMF_PORT", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.434 "hdgst": ${hdgst:-false}, 00:22:17.434 "ddgst": ${ddgst:-false} 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 } 00:22:17.434 EOF 00:22:17.434 )") 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.434 { 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme$subsystem", 00:22:17.434 "trtype": "$TEST_TRANSPORT", 00:22:17.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "$NVMF_PORT", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.434 "hdgst": ${hdgst:-false}, 00:22:17.434 "ddgst": ${ddgst:-false} 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 } 00:22:17.434 EOF 00:22:17.434 )") 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.434 { 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme$subsystem", 00:22:17.434 "trtype": "$TEST_TRANSPORT", 00:22:17.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "$NVMF_PORT", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.434 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:17.434 "hdgst": ${hdgst:-false}, 00:22:17.434 "ddgst": ${ddgst:-false} 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 } 00:22:17.434 EOF 00:22:17.434 )") 00:22:17.434 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:17.434 16:00:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme1", 00:22:17.434 "trtype": "tcp", 00:22:17.434 "traddr": "10.0.0.2", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "4420", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.434 "hdgst": false, 00:22:17.434 "ddgst": false 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 },{ 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme2", 00:22:17.434 "trtype": "tcp", 00:22:17.434 "traddr": "10.0.0.2", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "4420", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:17.434 "hdgst": false, 00:22:17.434 "ddgst": false 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 },{ 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme3", 00:22:17.434 "trtype": "tcp", 00:22:17.434 "traddr": "10.0.0.2", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "4420", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:17.434 "hdgst": false, 00:22:17.434 "ddgst": false 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 },{ 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme4", 00:22:17.434 "trtype": "tcp", 00:22:17.434 "traddr": "10.0.0.2", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "4420", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:17.434 "hdgst": false, 00:22:17.434 "ddgst": false 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 },{ 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme5", 00:22:17.434 "trtype": "tcp", 00:22:17.434 "traddr": "10.0.0.2", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "4420", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:17.434 "hdgst": false, 00:22:17.434 "ddgst": false 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 },{ 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme6", 00:22:17.434 "trtype": "tcp", 00:22:17.434 "traddr": "10.0.0.2", 00:22:17.434 "adrfam": "ipv4", 00:22:17.434 "trsvcid": "4420", 00:22:17.434 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:17.434 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:17.434 "hdgst": false, 00:22:17.434 "ddgst": false 00:22:17.434 }, 00:22:17.434 "method": "bdev_nvme_attach_controller" 00:22:17.434 },{ 00:22:17.434 "params": { 00:22:17.434 "name": "Nvme7", 00:22:17.434 "trtype": "tcp", 00:22:17.435 "traddr": "10.0.0.2", 00:22:17.435 "adrfam": "ipv4", 00:22:17.435 "trsvcid": "4420", 00:22:17.435 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:17.435 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:17.435 "hdgst": false, 00:22:17.435 "ddgst": false 00:22:17.435 }, 00:22:17.435 "method": "bdev_nvme_attach_controller" 00:22:17.435 },{ 00:22:17.435 "params": { 00:22:17.435 "name": "Nvme8", 00:22:17.435 "trtype": "tcp", 00:22:17.435 "traddr": "10.0.0.2", 00:22:17.435 "adrfam": "ipv4", 00:22:17.435 "trsvcid": "4420", 00:22:17.435 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:17.435 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:17.435 "hdgst": false, 00:22:17.435 "ddgst": false 00:22:17.435 }, 00:22:17.435 "method": "bdev_nvme_attach_controller" 00:22:17.435 },{ 00:22:17.435 "params": { 00:22:17.435 "name": "Nvme9", 00:22:17.435 "trtype": "tcp", 00:22:17.435 "traddr": "10.0.0.2", 00:22:17.435 "adrfam": "ipv4", 00:22:17.435 "trsvcid": "4420", 00:22:17.435 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:17.435 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:17.435 "hdgst": false, 00:22:17.435 "ddgst": false 00:22:17.435 }, 00:22:17.435 "method": "bdev_nvme_attach_controller" 00:22:17.435 },{ 00:22:17.435 "params": { 00:22:17.435 "name": "Nvme10", 00:22:17.435 "trtype": "tcp", 00:22:17.435 "traddr": "10.0.0.2", 00:22:17.435 "adrfam": "ipv4", 00:22:17.435 "trsvcid": "4420", 00:22:17.435 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:17.435 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:17.435 "hdgst": false, 00:22:17.435 "ddgst": false 00:22:17.435 }, 00:22:17.435 "method": "bdev_nvme_attach_controller" 00:22:17.435 }' 00:22:17.435 [2024-05-15 16:00:15.903162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.435 [2024-05-15 16:00:15.972396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.811 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:18.812 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:22:18.812 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:18.812 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.812 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:18.812 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.812 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3820528 00:22:18.812 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:18.812 16:00:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:19.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3820528 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3820222 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:19.748 16:00:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:19.748 { 00:22:19.748 "params": { 00:22:19.748 "name": "Nvme$subsystem", 00:22:19.748 "trtype": "$TEST_TRANSPORT", 00:22:19.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.748 "adrfam": "ipv4", 00:22:19.748 "trsvcid": "$NVMF_PORT", 00:22:19.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.748 "hdgst": ${hdgst:-false}, 00:22:19.748 "ddgst": ${ddgst:-false} 00:22:19.748 }, 00:22:19.748 "method": "bdev_nvme_attach_controller" 00:22:19.748 } 00:22:19.748 EOF 00:22:19.748 )") 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:19.748 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:19.748 { 00:22:19.748 "params": { 00:22:19.748 "name": "Nvme$subsystem", 00:22:19.748 "trtype": "$TEST_TRANSPORT", 00:22:19.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.748 "adrfam": "ipv4", 00:22:19.748 "trsvcid": "$NVMF_PORT", 00:22:19.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.748 "hdgst": ${hdgst:-false}, 00:22:19.748 "ddgst": ${ddgst:-false} 00:22:19.748 }, 00:22:19.748 "method": "bdev_nvme_attach_controller" 00:22:19.748 } 00:22:19.748 EOF 00:22:19.748 )") 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.008 { 00:22:20.008 "params": { 00:22:20.008 "name": "Nvme$subsystem", 00:22:20.008 "trtype": "$TEST_TRANSPORT", 00:22:20.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.008 "adrfam": "ipv4", 00:22:20.008 "trsvcid": "$NVMF_PORT", 00:22:20.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.008 "hdgst": ${hdgst:-false}, 00:22:20.008 "ddgst": ${ddgst:-false} 00:22:20.008 }, 00:22:20.008 "method": "bdev_nvme_attach_controller" 00:22:20.008 } 00:22:20.008 EOF 00:22:20.008 )") 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.008 { 00:22:20.008 "params": { 00:22:20.008 "name": "Nvme$subsystem", 00:22:20.008 "trtype": "$TEST_TRANSPORT", 00:22:20.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.008 "adrfam": "ipv4", 00:22:20.008 "trsvcid": "$NVMF_PORT", 00:22:20.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.008 "hdgst": ${hdgst:-false}, 00:22:20.008 "ddgst": ${ddgst:-false} 00:22:20.008 }, 00:22:20.008 "method": "bdev_nvme_attach_controller" 00:22:20.008 } 00:22:20.008 EOF 00:22:20.008 )") 00:22:20.008 16:00:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.008 { 00:22:20.008 "params": { 00:22:20.008 "name": "Nvme$subsystem", 00:22:20.008 "trtype": "$TEST_TRANSPORT", 00:22:20.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.008 "adrfam": "ipv4", 00:22:20.008 "trsvcid": "$NVMF_PORT", 00:22:20.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.008 "hdgst": ${hdgst:-false}, 00:22:20.008 "ddgst": ${ddgst:-false} 00:22:20.008 }, 00:22:20.008 "method": "bdev_nvme_attach_controller" 00:22:20.008 } 00:22:20.008 EOF 00:22:20.008 )") 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.008 { 00:22:20.008 "params": { 00:22:20.008 "name": "Nvme$subsystem", 00:22:20.008 "trtype": "$TEST_TRANSPORT", 00:22:20.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.008 "adrfam": "ipv4", 00:22:20.008 "trsvcid": "$NVMF_PORT", 00:22:20.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.008 "hdgst": ${hdgst:-false}, 00:22:20.008 "ddgst": ${ddgst:-false} 00:22:20.008 }, 00:22:20.008 "method": "bdev_nvme_attach_controller" 00:22:20.008 } 00:22:20.008 EOF 00:22:20.008 )") 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 [2024-05-15 16:00:18.346646] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:22:20.008 [2024-05-15 16:00:18.346702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3821078 ] 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.008 { 00:22:20.008 "params": { 00:22:20.008 "name": "Nvme$subsystem", 00:22:20.008 "trtype": "$TEST_TRANSPORT", 00:22:20.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.008 "adrfam": "ipv4", 00:22:20.008 "trsvcid": "$NVMF_PORT", 00:22:20.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.008 "hdgst": ${hdgst:-false}, 00:22:20.008 "ddgst": ${ddgst:-false} 00:22:20.008 }, 00:22:20.008 "method": "bdev_nvme_attach_controller" 00:22:20.008 } 00:22:20.008 EOF 00:22:20.008 )") 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.008 { 00:22:20.008 "params": { 00:22:20.008 "name": "Nvme$subsystem", 00:22:20.008 "trtype": "$TEST_TRANSPORT", 00:22:20.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.008 "adrfam": "ipv4", 00:22:20.008 "trsvcid": "$NVMF_PORT", 00:22:20.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.008 "hdgst": ${hdgst:-false}, 00:22:20.008 "ddgst": ${ddgst:-false} 00:22:20.008 }, 00:22:20.008 "method": "bdev_nvme_attach_controller" 00:22:20.008 } 00:22:20.008 EOF 00:22:20.008 )") 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.008 { 00:22:20.008 "params": { 00:22:20.008 "name": "Nvme$subsystem", 00:22:20.008 "trtype": "$TEST_TRANSPORT", 00:22:20.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.008 "adrfam": "ipv4", 00:22:20.008 "trsvcid": "$NVMF_PORT", 00:22:20.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.008 "hdgst": ${hdgst:-false}, 00:22:20.008 "ddgst": ${ddgst:-false} 00:22:20.008 }, 00:22:20.008 "method": "bdev_nvme_attach_controller" 00:22:20.008 } 00:22:20.008 EOF 00:22:20.008 )") 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.008 { 00:22:20.008 "params": { 00:22:20.008 "name": "Nvme$subsystem", 00:22:20.008 "trtype": "$TEST_TRANSPORT", 00:22:20.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.008 "adrfam": "ipv4", 00:22:20.008 "trsvcid": "$NVMF_PORT", 00:22:20.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.008 "hdgst": ${hdgst:-false}, 
00:22:20.008 "ddgst": ${ddgst:-false} 00:22:20.008 }, 00:22:20.008 "method": "bdev_nvme_attach_controller" 00:22:20.008 } 00:22:20.008 EOF 00:22:20.008 )") 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:20.008 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:20.008 16:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:20.008 "params": { 00:22:20.009 "name": "Nvme1", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme2", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme3", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme4", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme5", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme6", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme7", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 
00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme8", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme9", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 },{ 00:22:20.009 "params": { 00:22:20.009 "name": "Nvme10", 00:22:20.009 "trtype": "tcp", 00:22:20.009 "traddr": "10.0.0.2", 00:22:20.009 "adrfam": "ipv4", 00:22:20.009 "trsvcid": "4420", 00:22:20.009 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:20.009 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:20.009 "hdgst": false, 00:22:20.009 "ddgst": false 00:22:20.009 }, 00:22:20.009 "method": "bdev_nvme_attach_controller" 00:22:20.009 }' 00:22:20.009 [2024-05-15 16:00:18.419675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.009 [2024-05-15 16:00:18.489513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.915 Running I/O for 1 seconds... 00:22:22.864 00:22:22.864 Latency(us) 00:22:22.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.864 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme1n1 : 1.07 304.44 19.03 0.00 0.00 207840.67 13736.35 205520.90 00:22:22.864 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme2n1 : 1.16 221.25 13.83 0.00 0.00 282337.08 19922.94 265080.01 00:22:22.864 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme3n1 : 1.17 273.98 17.12 0.00 0.00 224228.80 27682.41 221459.25 00:22:22.864 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme4n1 : 1.10 232.08 14.50 0.00 0.00 261410.00 31876.71 218103.81 00:22:22.864 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme5n1 : 1.15 277.83 17.36 0.00 0.00 215778.43 20552.09 201326.59 00:22:22.864 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme6n1 : 1.13 227.21 14.20 0.00 0.00 256212.58 21076.38 251658.24 00:22:22.864 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme7n1 : 1.18 324.44 20.28 0.00 0.00 180195.60 18769.51 213909.50 00:22:22.864 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme8n1 : 1.19 320.24 20.02 0.00 0.00 179416.91 
12792.63 204682.04 00:22:22.864 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme9n1 : 1.17 275.21 17.20 0.00 0.00 205737.08 2896.69 213070.64 00:22:22.864 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.864 Verification LBA range: start 0x0 length 0x400 00:22:22.864 Nvme10n1 : 1.19 322.12 20.13 0.00 0.00 173815.60 13316.92 212231.78 00:22:22.864 =================================================================================================================== 00:22:22.864 Total : 2778.79 173.67 0.00 0.00 213374.45 2896.69 265080.01 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.124 rmmod nvme_tcp 00:22:23.124 rmmod nvme_fabrics 00:22:23.124 rmmod nvme_keyring 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3820222 ']' 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3820222 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3820222 ']' 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3820222 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:22:23.124 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:23.125 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3820222 00:22:23.125 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:23.125 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:23.125 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 3820222' 00:22:23.125 killing process with pid 3820222 00:22:23.125 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3820222 00:22:23.125 [2024-05-15 16:00:21.584921] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:23.125 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3820222 00:22:23.693 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.693 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.693 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.693 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.693 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.693 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.693 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.693 16:00:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.600 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.600 00:22:25.600 real 0m16.620s 00:22:25.600 user 0m35.315s 00:22:25.600 sys 0m6.975s 00:22:25.600 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:25.600 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.600 ************************************ 00:22:25.600 END TEST nvmf_shutdown_tc1 00:22:25.600 ************************************ 00:22:25.600 16:00:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:25.600 16:00:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:25.600 16:00:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:25.600 16:00:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.861 ************************************ 00:22:25.861 START TEST nvmf_shutdown_tc2 00:22:25.861 ************************************ 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.861 
16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:25.861 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:25.861 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
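[editor's note] The gather_supported_nvmf_pci_devs steps above collect the supported NIC device IDs into per-family arrays, then walk each matching PCI function looking for usable net devices under sysfs. A hedged sketch of that loop, reconstructed from the trace: pci_bus_cache is assumed to be an associative array mapping "vendor:device" IDs to PCI addresses (its construction is not shown here), and the operstate test stands in for whatever link check the harness actually performs ("[[ up == up ]]" in the trace).

# Sketch of the e810 port discovery stepped through above (nvmf/common.sh@291-401).
declare -A pci_bus_cache   # assumed: filled elsewhere from the PCI bus scan
intel=0x8086
e810=() net_devs=()
e810+=(${pci_bus_cache["$intel:0x1592"]})   # unquoted on purpose: split into addresses
e810+=(${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")

for pci in "${pci_devs[@]}"; do
    # Every netdev bound to this PCI function shows up under its sysfs node.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${!pci_net_devs[@]}"; do
        # Drop interfaces that are not up (assumed check).
        [[ $(cat "${pci_net_devs[net_dev]}/operstate" 2>/dev/null) == up ]] || unset -v "pci_net_devs[net_dev]"
    done
    ((${#pci_net_devs[@]} == 0)) && continue
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the interface names, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done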
00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:25.861 Found net devices under 0000:af:00.0: cvl_0_0 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.861 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:25.861 Found net devices under 0000:af:00.1: cvl_0_1 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
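[editor's note] With two ports discovered (cvl_0_0 and cvl_0_1), nvmf_tcp_init pairs them: the first becomes the target side and will live in its own network namespace, the second stays in the root namespace as the initiator. The assignments below are reconstructed from the variable values visible in the trace; the exact selection logic in nvmf/common.sh may differ slightly.

# Address plan used by this run (reconstructed from nvmf/common.sh@229-244 above).
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
TCP_INTERFACE_LIST=("${net_devs[@]}")            # e.g. (cvl_0_0 cvl_0_1)
if ((${#TCP_INTERFACE_LIST[@]} > 1)); then
    NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
    NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}
fi
NVMF_SECOND_TARGET_IP=                           # only one target port in this layout
NVMF_TARGET_NAMESPACE=${NVMF_TARGET_INTERFACE}_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")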
00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.862 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:22:26.121 00:22:26.121 --- 10.0.0.2 ping statistics --- 00:22:26.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.121 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:22:26.121 00:22:26.121 --- 10.0.0.1 ping statistics --- 00:22:26.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.121 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.121 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3822244 00:22:26.122 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3822244 00:22:26.122 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:26.122 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3822244 ']' 00:22:26.122 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.122 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.122 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.122 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.122 16:00:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.122 [2024-05-15 16:00:24.606108] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
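[editor's note] The namespace plumbing just traced, condensed into one sequence for readability. The commands are taken from the xtrace above, rewritten in terms of the harness variables (interface names follow this rig's cvl_0_* naming); the two pings at the end are the sanity check whose output appears in the log.

# Move the target port into its own namespace and verify connectivity both ways.
ip -4 addr flush "$NVMF_TARGET_INTERFACE"
ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"

# Initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 inside it.
ip addr add "$NVMF_INITIATOR_IP/24" dev "$NVMF_INITIATOR_INTERFACE"
"${NVMF_TARGET_NS_CMD[@]}" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$NVMF_TARGET_INTERFACE"

ip link set "$NVMF_INITIATOR_INTERFACE" up
"${NVMF_TARGET_NS_CMD[@]}" ip link set "$NVMF_TARGET_INTERFACE" up
"${NVMF_TARGET_NS_CMD[@]}" ip link set lo up

# Let NVMe/TCP traffic in on the initiator side, then prove both directions work.
iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$NVMF_FIRST_TARGET_IP"
"${NVMF_TARGET_NS_CMD[@]}" ping -c 1 "$NVMF_INITIATOR_IP"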
00:22:26.122 [2024-05-15 16:00:24.606155] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.122 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.122 [2024-05-15 16:00:24.679905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.381 [2024-05-15 16:00:24.753553] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.381 [2024-05-15 16:00:24.753588] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.381 [2024-05-15 16:00:24.753597] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.381 [2024-05-15 16:00:24.753606] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.381 [2024-05-15 16:00:24.753612] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.381 [2024-05-15 16:00:24.753720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.381 [2024-05-15 16:00:24.753827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.381 [2024-05-15 16:00:24.753935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.381 [2024-05-15 16:00:24.753942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.951 [2024-05-15 16:00:25.457972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
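[editor's note] nvmfappstart then launches nvmf_tgt inside the target namespace and waits for its RPC socket before the test proceeds; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is that wait. The sketch below only captures the polling idea: the flags are the ones visible in the trace, while the loop body is an assumption, since the real waitforlisten in autotest_common.sh is not expanded in this log and does more bookkeeping.

# Hedged sketch of the start-and-wait pattern reflected above.
"${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for ((i = 100; i > 0; i--)); do
    ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null && break   # RPC socket is answering
    kill -0 "$nvmfpid" || exit 1                                  # target died while we waited
    sleep 0.1
done
((i > 0)) || exit 1   # gave up waiting for the socket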
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.951 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.211 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.211 Malloc1 00:22:27.211 [2024-05-15 16:00:25.568677] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:27.211 [2024-05-15 16:00:25.568928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.211 Malloc2 00:22:27.211 Malloc3 00:22:27.211 Malloc4 00:22:27.211 Malloc5 00:22:27.211 Malloc6 00:22:27.470 Malloc7 00:22:27.470 Malloc8 00:22:27.470 Malloc9 00:22:27.470 Malloc10 00:22:27.470 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.470 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:27.470 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.470 16:00:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.470 16:00:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3822557 00:22:27.470 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3822557 /var/tmp/bdevperf.sock 00:22:27.470 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3822557 ']' 00:22:27.470 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.470 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.471 { 00:22:27.471 "params": { 00:22:27.471 "name": "Nvme$subsystem", 00:22:27.471 "trtype": "$TEST_TRANSPORT", 00:22:27.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.471 "adrfam": "ipv4", 00:22:27.471 "trsvcid": "$NVMF_PORT", 00:22:27.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.471 "hdgst": ${hdgst:-false}, 00:22:27.471 "ddgst": ${ddgst:-false} 00:22:27.471 }, 00:22:27.471 "method": "bdev_nvme_attach_controller" 00:22:27.471 } 00:22:27.471 EOF 00:22:27.471 )") 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.471 { 00:22:27.471 "params": { 00:22:27.471 "name": "Nvme$subsystem", 00:22:27.471 "trtype": "$TEST_TRANSPORT", 00:22:27.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.471 "adrfam": "ipv4", 00:22:27.471 "trsvcid": "$NVMF_PORT", 00:22:27.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.471 "hdgst": ${hdgst:-false}, 00:22:27.471 "ddgst": ${ddgst:-false} 00:22:27.471 }, 00:22:27.471 "method": "bdev_nvme_attach_controller" 00:22:27.471 } 00:22:27.471 EOF 00:22:27.471 )") 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.471 { 00:22:27.471 "params": { 00:22:27.471 "name": "Nvme$subsystem", 00:22:27.471 "trtype": "$TEST_TRANSPORT", 00:22:27.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.471 "adrfam": "ipv4", 00:22:27.471 "trsvcid": "$NVMF_PORT", 00:22:27.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.471 "hdgst": ${hdgst:-false}, 00:22:27.471 "ddgst": ${ddgst:-false} 00:22:27.471 }, 00:22:27.471 "method": "bdev_nvme_attach_controller" 00:22:27.471 } 00:22:27.471 EOF 00:22:27.471 )") 00:22:27.471 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.731 { 00:22:27.731 "params": { 00:22:27.731 "name": "Nvme$subsystem", 00:22:27.731 "trtype": "$TEST_TRANSPORT", 00:22:27.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.731 "adrfam": "ipv4", 00:22:27.731 "trsvcid": "$NVMF_PORT", 00:22:27.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.731 "hdgst": ${hdgst:-false}, 00:22:27.731 "ddgst": ${ddgst:-false} 00:22:27.731 }, 00:22:27.731 "method": "bdev_nvme_attach_controller" 00:22:27.731 } 00:22:27.731 EOF 00:22:27.731 )") 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.731 { 00:22:27.731 "params": { 00:22:27.731 "name": "Nvme$subsystem", 00:22:27.731 "trtype": "$TEST_TRANSPORT", 00:22:27.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.731 "adrfam": "ipv4", 00:22:27.731 "trsvcid": "$NVMF_PORT", 00:22:27.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.731 "hdgst": ${hdgst:-false}, 00:22:27.731 "ddgst": ${ddgst:-false} 00:22:27.731 }, 00:22:27.731 "method": "bdev_nvme_attach_controller" 00:22:27.731 } 00:22:27.731 EOF 00:22:27.731 )") 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.731 { 00:22:27.731 "params": { 00:22:27.731 "name": "Nvme$subsystem", 00:22:27.731 "trtype": "$TEST_TRANSPORT", 00:22:27.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.731 "adrfam": "ipv4", 00:22:27.731 "trsvcid": "$NVMF_PORT", 00:22:27.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.731 "hdgst": ${hdgst:-false}, 00:22:27.731 "ddgst": ${ddgst:-false} 00:22:27.731 }, 00:22:27.731 "method": "bdev_nvme_attach_controller" 00:22:27.731 } 00:22:27.731 EOF 00:22:27.731 )") 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.731 [2024-05-15 16:00:26.054375] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 
23.11.0 initialization... 00:22:27.731 [2024-05-15 16:00:26.054428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3822557 ] 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.731 { 00:22:27.731 "params": { 00:22:27.731 "name": "Nvme$subsystem", 00:22:27.731 "trtype": "$TEST_TRANSPORT", 00:22:27.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.731 "adrfam": "ipv4", 00:22:27.731 "trsvcid": "$NVMF_PORT", 00:22:27.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.731 "hdgst": ${hdgst:-false}, 00:22:27.731 "ddgst": ${ddgst:-false} 00:22:27.731 }, 00:22:27.731 "method": "bdev_nvme_attach_controller" 00:22:27.731 } 00:22:27.731 EOF 00:22:27.731 )") 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.731 { 00:22:27.731 "params": { 00:22:27.731 "name": "Nvme$subsystem", 00:22:27.731 "trtype": "$TEST_TRANSPORT", 00:22:27.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.731 "adrfam": "ipv4", 00:22:27.731 "trsvcid": "$NVMF_PORT", 00:22:27.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.731 "hdgst": ${hdgst:-false}, 00:22:27.731 "ddgst": ${ddgst:-false} 00:22:27.731 }, 00:22:27.731 "method": "bdev_nvme_attach_controller" 00:22:27.731 } 00:22:27.731 EOF 00:22:27.731 )") 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.731 { 00:22:27.731 "params": { 00:22:27.731 "name": "Nvme$subsystem", 00:22:27.731 "trtype": "$TEST_TRANSPORT", 00:22:27.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.731 "adrfam": "ipv4", 00:22:27.731 "trsvcid": "$NVMF_PORT", 00:22:27.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.731 "hdgst": ${hdgst:-false}, 00:22:27.731 "ddgst": ${ddgst:-false} 00:22:27.731 }, 00:22:27.731 "method": "bdev_nvme_attach_controller" 00:22:27.731 } 00:22:27.731 EOF 00:22:27.731 )") 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.731 { 00:22:27.731 "params": { 00:22:27.731 "name": "Nvme$subsystem", 00:22:27.731 "trtype": "$TEST_TRANSPORT", 00:22:27.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.731 "adrfam": "ipv4", 00:22:27.731 "trsvcid": "$NVMF_PORT", 00:22:27.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.731 
"hdgst": ${hdgst:-false}, 00:22:27.731 "ddgst": ${ddgst:-false} 00:22:27.731 }, 00:22:27.731 "method": "bdev_nvme_attach_controller" 00:22:27.731 } 00:22:27.731 EOF 00:22:27.731 )") 00:22:27.731 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:27.731 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.732 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:27.732 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:27.732 16:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme1", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme2", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme3", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme4", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme5", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme6", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme7", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:27.732 "hdgst": false, 
00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme8", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme9", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 },{ 00:22:27.732 "params": { 00:22:27.732 "name": "Nvme10", 00:22:27.732 "trtype": "tcp", 00:22:27.732 "traddr": "10.0.0.2", 00:22:27.732 "adrfam": "ipv4", 00:22:27.732 "trsvcid": "4420", 00:22:27.732 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:27.732 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:27.732 "hdgst": false, 00:22:27.732 "ddgst": false 00:22:27.732 }, 00:22:27.732 "method": "bdev_nvme_attach_controller" 00:22:27.732 }' 00:22:27.732 [2024-05-15 16:00:26.127144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.732 [2024-05-15 16:00:26.195859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.707 Running I/O for 10 seconds... 00:22:29.707 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:29.707 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:29.707 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:29.707 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops'
00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3
00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:22:29.708 16:00:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3822557
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3822557 ']'
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3822557
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:29.708 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3822557
00:22:29.968 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:22:29.968 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:22:29.968 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3822557'
00:22:29.968 killing process with pid 3822557
00:22:29.968 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3822557
00:22:29.968 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3822557
00:22:29.968 Received shutdown signal, test time was about 0.618759 seconds
00:22:29.968
00:22:29.968 Latency(us)
00:22:29.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:29.968 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme1n1 : 0.60 320.86 20.05 0.00 0.00 196475.84 18979.23 204682.04
00:22:29.968 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme2n1 : 0.61 316.98 19.81 0.00 0.00 193980.01 32505.86 188743.68
00:22:29.968 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme3n1 : 0.59 215.66 13.48 0.00 0.00 276874.85 37748.74 223136.97
00:22:29.968 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme4n1 : 0.61 314.02 19.63 0.00 0.00 185913.34 19503.51 205520.90
00:22:29.968 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme5n1 : 0.61 312.79 19.55 0.00 0.00 181749.08 16986.93 200487.73
00:22:29.968 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme6n1 : 0.58 221.93 13.87 0.00 0.00 246430.92 21181.24 216426.09
00:22:29.968 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme7n1 : 0.61 322.71 20.17 0.00 0.00 165420.35 3827.30 188743.68
00:22:29.968 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme8n1 : 0.62 310.64 19.41 0.00 0.00 168220.81 21915.24 208876.34
00:22:29.968 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme9n1 : 0.59 215.34 13.46 0.00 0.00 230131.71 19608.37 238236.47
00:22:29.968 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.968 Verification LBA range: start 0x0 length 0x400
00:22:29.968 Nvme10n1 : 0.58 219.63 13.73 0.00 0.00 219779.89 27472.69 197132.29
00:22:29.968 ===================================================================================================================
00:22:29.968 Total : 2770.55 173.16 0.00 0.00 200750.19 3827.30 238236.47
00:22:30.227 16:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3822244
00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:22:31.166 16:00:29
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.166 rmmod nvme_tcp 00:22:31.166 rmmod nvme_fabrics 00:22:31.166 rmmod nvme_keyring 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3822244 ']' 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3822244 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3822244 ']' 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3822244 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:31.166 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3822244 00:22:31.426 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:31.426 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:31.426 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3822244' 00:22:31.426 killing process with pid 3822244 00:22:31.426 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3822244 00:22:31.426 [2024-05-15 16:00:29.742820] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:31.426 16:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3822244 00:22:31.687 16:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.687 16:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.687 16:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.687 16:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.687 16:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.687 16:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.687 16:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.687 16:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.226 00:22:34.226 real 0m8.047s 00:22:34.226 user 0m23.802s 00:22:34.226 sys 0m1.578s 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.226 ************************************ 00:22:34.226 END TEST nvmf_shutdown_tc2 00:22:34.226 ************************************ 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.226 ************************************ 00:22:34.226 START TEST nvmf_shutdown_tc3 00:22:34.226 ************************************ 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.226 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 
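
The array declarations being set up here feed gather_supported_nvmf_pci_devs, which buckets Intel E810/X722 and Mellanox ports by PCI vendor:device ID and then resolves each PCI function to its kernel net device through sysfs (the "Found net devices under 0000:af:00.x" lines further on). A minimal standalone sketch of that classification, as shown below; note the harness reads a prebuilt pci_bus_cache while this sketch substitutes lspci, and the operstate test standing in for the trace's up-check is an assumption, not the verbatim nvmf/common.sh logic:

  # Sketch only: approximate the E810 discovery with lspci (assumption);
  # the real helper consults a cached PCI-ID map instead.
  pci_devs=() net_devs=()
  for id in 0x1592 0x159b; do                         # E810 device IDs from the trace
      while read -r slot _; do
          pci_devs+=("$slot")
      done < <(lspci -Dn -d "8086:${id#0x}")          # -D prints the full domain:bus:dev.fn
  done
  for pci in "${pci_devs[@]}"; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do   # each function names its netdev here
          [[ -e $path ]] || continue
          dev=${path##*/}
          # keep only ports that are up, mirroring the [[ up == up ]] checks in the trace
          [[ $(<"/sys/class/net/$dev/operstate") == up ]] && net_devs+=("$dev")
      done
  done
  printf 'Found net devices: %s\n' "${net_devs[*]}"
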
00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:34.227 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:34.227 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:34.227 Found net devices under 0000:af:00.0: cvl_0_0 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:22:34.227 Found net devices under 0000:af:00.1: cvl_0_1 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:34.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:34.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:34.227 00:22:34.227 --- 10.0.0.2 ping statistics --- 00:22:34.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.227 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:22:34.227 00:22:34.227 --- 10.0.0.1 ping statistics --- 00:22:34.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.227 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:34.227 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3823747 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3823747 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3823747 ']' 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
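
The two ping exchanges above close out nvmf_tcp_init: one E810 port (cvl_0_0) was moved into a private network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1. Condensed from the xtrace earlier in this run, the setup amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

Proving the link in both directions before nvmf_tgt starts means a later connect failure can be blamed on the target, not the wiring.
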
00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.228 16:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.228 [2024-05-15 16:00:32.724239] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:34.228 [2024-05-15 16:00:32.724284] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.228 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.490 [2024-05-15 16:00:32.798032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.490 [2024-05-15 16:00:32.872073] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.490 [2024-05-15 16:00:32.872107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.490 [2024-05-15 16:00:32.872116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.490 [2024-05-15 16:00:32.872124] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.490 [2024-05-15 16:00:32.872131] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.490 [2024-05-15 16:00:32.872229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.490 [2024-05-15 16:00:32.872321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.490 [2024-05-15 16:00:32.872427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.490 [2024-05-15 16:00:32.872428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.058 [2024-05-15 16:00:33.582996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.058 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.318 16:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.318 Malloc1 00:22:35.318 [2024-05-15 16:00:33.685419] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:35.318 [2024-05-15 16:00:33.685674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.318 Malloc2 00:22:35.318 Malloc3 00:22:35.318 Malloc4 00:22:35.318 
Malloc5 00:22:35.318 Malloc6 00:22:35.578 Malloc7 00:22:35.578 Malloc8 00:22:35.578 Malloc9 00:22:35.578 Malloc10 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3824066 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3824066 /var/tmp/bdevperf.sock 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3824066 ']' 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.578 { 00:22:35.578 "params": { 00:22:35.578 "name": "Nvme$subsystem", 00:22:35.578 "trtype": "$TEST_TRANSPORT", 00:22:35.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.578 "adrfam": "ipv4", 00:22:35.578 "trsvcid": "$NVMF_PORT", 00:22:35.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.578 "hdgst": ${hdgst:-false}, 00:22:35.578 "ddgst": ${ddgst:-false} 00:22:35.578 }, 00:22:35.578 "method": "bdev_nvme_attach_controller" 00:22:35.578 } 00:22:35.578 EOF 00:22:35.578 )") 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.578 { 00:22:35.578 "params": { 00:22:35.578 "name": "Nvme$subsystem", 00:22:35.578 "trtype": "$TEST_TRANSPORT", 
00:22:35.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.578 "adrfam": "ipv4", 00:22:35.578 "trsvcid": "$NVMF_PORT", 00:22:35.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.578 "hdgst": ${hdgst:-false}, 00:22:35.578 "ddgst": ${ddgst:-false} 00:22:35.578 }, 00:22:35.578 "method": "bdev_nvme_attach_controller" 00:22:35.578 } 00:22:35.578 EOF 00:22:35.578 )") 00:22:35.578 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.838 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.838 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.838 { 00:22:35.838 "params": { 00:22:35.838 "name": "Nvme$subsystem", 00:22:35.838 "trtype": "$TEST_TRANSPORT", 00:22:35.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.838 "adrfam": "ipv4", 00:22:35.838 "trsvcid": "$NVMF_PORT", 00:22:35.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.838 "hdgst": ${hdgst:-false}, 00:22:35.838 "ddgst": ${ddgst:-false} 00:22:35.838 }, 00:22:35.838 "method": "bdev_nvme_attach_controller" 00:22:35.838 } 00:22:35.838 EOF 00:22:35.839 )") 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.839 { 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme$subsystem", 00:22:35.839 "trtype": "$TEST_TRANSPORT", 00:22:35.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "$NVMF_PORT", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.839 "hdgst": ${hdgst:-false}, 00:22:35.839 "ddgst": ${ddgst:-false} 00:22:35.839 }, 00:22:35.839 "method": "bdev_nvme_attach_controller" 00:22:35.839 } 00:22:35.839 EOF 00:22:35.839 )") 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.839 { 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme$subsystem", 00:22:35.839 "trtype": "$TEST_TRANSPORT", 00:22:35.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "$NVMF_PORT", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.839 "hdgst": ${hdgst:-false}, 00:22:35.839 "ddgst": ${ddgst:-false} 00:22:35.839 }, 00:22:35.839 "method": "bdev_nvme_attach_controller" 00:22:35.839 } 00:22:35.839 EOF 00:22:35.839 )") 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.839 { 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme$subsystem", 00:22:35.839 "trtype": "$TEST_TRANSPORT", 00:22:35.839 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "$NVMF_PORT", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.839 "hdgst": ${hdgst:-false}, 00:22:35.839 "ddgst": ${ddgst:-false} 00:22:35.839 }, 00:22:35.839 "method": "bdev_nvme_attach_controller" 00:22:35.839 } 00:22:35.839 EOF 00:22:35.839 )") 00:22:35.839 [2024-05-15 16:00:34.171820] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:35.839 [2024-05-15 16:00:34.171875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824066 ] 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.839 { 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme$subsystem", 00:22:35.839 "trtype": "$TEST_TRANSPORT", 00:22:35.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "$NVMF_PORT", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.839 "hdgst": ${hdgst:-false}, 00:22:35.839 "ddgst": ${ddgst:-false} 00:22:35.839 }, 00:22:35.839 "method": "bdev_nvme_attach_controller" 00:22:35.839 } 00:22:35.839 EOF 00:22:35.839 )") 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.839 { 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme$subsystem", 00:22:35.839 "trtype": "$TEST_TRANSPORT", 00:22:35.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "$NVMF_PORT", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.839 "hdgst": ${hdgst:-false}, 00:22:35.839 "ddgst": ${ddgst:-false} 00:22:35.839 }, 00:22:35.839 "method": "bdev_nvme_attach_controller" 00:22:35.839 } 00:22:35.839 EOF 00:22:35.839 )") 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.839 { 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme$subsystem", 00:22:35.839 "trtype": "$TEST_TRANSPORT", 00:22:35.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "$NVMF_PORT", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.839 "hdgst": ${hdgst:-false}, 00:22:35.839 "ddgst": ${ddgst:-false} 00:22:35.839 }, 00:22:35.839 "method": "bdev_nvme_attach_controller" 00:22:35.839 } 00:22:35.839 EOF 00:22:35.839 )") 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 
00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.839 { 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme$subsystem", 00:22:35.839 "trtype": "$TEST_TRANSPORT", 00:22:35.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "$NVMF_PORT", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.839 "hdgst": ${hdgst:-false}, 00:22:35.839 "ddgst": ${ddgst:-false} 00:22:35.839 }, 00:22:35.839 "method": "bdev_nvme_attach_controller" 00:22:35.839 } 00:22:35.839 EOF 00:22:35.839 )") 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:35.839 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:35.839 16:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme1", 00:22:35.839 "trtype": "tcp", 00:22:35.839 "traddr": "10.0.0.2", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "4420", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.839 "hdgst": false, 00:22:35.839 "ddgst": false 00:22:35.839 }, 00:22:35.839 "method": "bdev_nvme_attach_controller" 00:22:35.839 },{ 00:22:35.839 "params": { 00:22:35.839 "name": "Nvme2", 00:22:35.839 "trtype": "tcp", 00:22:35.839 "traddr": "10.0.0.2", 00:22:35.839 "adrfam": "ipv4", 00:22:35.839 "trsvcid": "4420", 00:22:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:35.839 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.839 "hdgst": false, 00:22:35.839 "ddgst": false 00:22:35.839 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 },{ 00:22:35.840 "params": { 00:22:35.840 "name": "Nvme3", 00:22:35.840 "trtype": "tcp", 00:22:35.840 "traddr": "10.0.0.2", 00:22:35.840 "adrfam": "ipv4", 00:22:35.840 "trsvcid": "4420", 00:22:35.840 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:35.840 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:35.840 "hdgst": false, 00:22:35.840 "ddgst": false 00:22:35.840 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 },{ 00:22:35.840 "params": { 00:22:35.840 "name": "Nvme4", 00:22:35.840 "trtype": "tcp", 00:22:35.840 "traddr": "10.0.0.2", 00:22:35.840 "adrfam": "ipv4", 00:22:35.840 "trsvcid": "4420", 00:22:35.840 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:35.840 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:35.840 "hdgst": false, 00:22:35.840 "ddgst": false 00:22:35.840 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 },{ 00:22:35.840 "params": { 00:22:35.840 "name": "Nvme5", 00:22:35.840 "trtype": "tcp", 00:22:35.840 "traddr": "10.0.0.2", 00:22:35.840 "adrfam": "ipv4", 00:22:35.840 "trsvcid": "4420", 00:22:35.840 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:35.840 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:35.840 "hdgst": false, 00:22:35.840 "ddgst": false 00:22:35.840 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 },{ 00:22:35.840 "params": { 00:22:35.840 "name": "Nvme6", 00:22:35.840 "trtype": "tcp", 00:22:35.840 "traddr": "10.0.0.2", 00:22:35.840 "adrfam": "ipv4", 
00:22:35.840 "trsvcid": "4420", 00:22:35.840 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:35.840 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:35.840 "hdgst": false, 00:22:35.840 "ddgst": false 00:22:35.840 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 },{ 00:22:35.840 "params": { 00:22:35.840 "name": "Nvme7", 00:22:35.840 "trtype": "tcp", 00:22:35.840 "traddr": "10.0.0.2", 00:22:35.840 "adrfam": "ipv4", 00:22:35.840 "trsvcid": "4420", 00:22:35.840 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:35.840 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:35.840 "hdgst": false, 00:22:35.840 "ddgst": false 00:22:35.840 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 },{ 00:22:35.840 "params": { 00:22:35.840 "name": "Nvme8", 00:22:35.840 "trtype": "tcp", 00:22:35.840 "traddr": "10.0.0.2", 00:22:35.840 "adrfam": "ipv4", 00:22:35.840 "trsvcid": "4420", 00:22:35.840 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:35.840 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:35.840 "hdgst": false, 00:22:35.840 "ddgst": false 00:22:35.840 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 },{ 00:22:35.840 "params": { 00:22:35.840 "name": "Nvme9", 00:22:35.840 "trtype": "tcp", 00:22:35.840 "traddr": "10.0.0.2", 00:22:35.840 "adrfam": "ipv4", 00:22:35.840 "trsvcid": "4420", 00:22:35.840 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:35.840 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:35.840 "hdgst": false, 00:22:35.840 "ddgst": false 00:22:35.840 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 },{ 00:22:35.840 "params": { 00:22:35.840 "name": "Nvme10", 00:22:35.840 "trtype": "tcp", 00:22:35.840 "traddr": "10.0.0.2", 00:22:35.840 "adrfam": "ipv4", 00:22:35.840 "trsvcid": "4420", 00:22:35.840 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:35.840 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:35.840 "hdgst": false, 00:22:35.840 "ddgst": false 00:22:35.840 }, 00:22:35.840 "method": "bdev_nvme_attach_controller" 00:22:35.840 }' 00:22:35.840 [2024-05-15 16:00:34.244168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.840 [2024-05-15 16:00:34.312784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.220 Running I/O for 10 seconds... 
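
The ten-controller JSON block just rendered is the output of gen_nvmf_target_json, whose xtrace appears a few lines earlier: one heredoc fragment per subsystem is expanded with the shared target address, the fragments are joined on commas, and bdevperf reads the stream through process substitution (the --json /dev/fd/63 in its command line). A compacted sketch of that generator, reconstructed from the trace rather than copied from nvmf/common.sh; TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT come from the test environment (tcp / 10.0.0.2 / 4420 in this run):

  gen_nvmf_target_json() {
      local subsystem
      local -a config=()
      for subsystem in "${@:-1}"; do                 # one fragment per requested subsystem
          config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "$TEST_TRANSPORT",
      "traddr": "$NVMF_FIRST_TARGET_IP",
      "adrfam": "ipv4",
      "trsvcid": "$NVMF_PORT",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": ${hdgst:-false},
      "ddgst": ${ddgst:-false}
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
          )")
      done
      local IFS=,                                    # join fragments with commas, hence the },{ above
      printf '%s\n' "${config[*]}"
  }
  # fed to the benchmark via process substitution, as in the trace:
  # bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  #     -q 64 -o 65536 -w verify -t 10
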
00:22:37.220 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:37.220 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:22:37.220 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:37.220 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.220 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:37.480 16:00:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:37.740 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:37.999 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:38.000 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:38.000 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.000 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.000 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.000 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:38.000 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3823747 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3823747 ']' 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3823747 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3823747 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3823747' 00:22:38.276 killing process with pid 3823747 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3823747 00:22:38.276 [2024-05-15 16:00:36.634487] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:38.276 16:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3823747 00:22:38.276 [2024-05-15 16:00:36.634974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d33b0 is same with the state(5) to be set 00:22:38.276 [2024-05-15 16:00:36.635002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d33b0 is same with the state(5) to be set 00:22:38.276 [2024-05-15 
16:00:36.635012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d33b0 is same with the state(5) to be set
00:22:38.276 [... tcp.c:1598:nvmf_tcp_qpair_set_recv_state: the tqpair=0x18d33b0 error above repeated ~60 more times between 16:00:36.635021 and 16:00:36.635553; identical lines collapsed ...]
00:22:38.276 [2024-05-15 16:00:36.636849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ed260 is same with the state(5) to be set
00:22:38.277 [2024-05-15 16:00:36.637613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3850 is same with the state(5) to be set
00:22:38.277 [2024-05-15 16:00:36.638216] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:38.277 [2024-05-15 16:00:36.638728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
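The xtrace above (target/shutdown.sh@50-69) is the test's waitforio helper: before the target app (pid 3823747) is killed, it polls bdevperf over its RPC socket until the test bdev has serviced at least 100 reads, proving I/O was in flight when the shutdown began. A minimal sketch reconstructed from that trace, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py as traced:

waitforio() {
    # RPC socket and bdev name are both required (shutdown.sh@50, @54)
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    [[ -z "$rpc_sock" || -z "$bdev" ]] && return 1
    # Up to 10 polls, 0.25s apart (shutdown.sh@57-69): read num_read_ops
    # from bdev_get_iostat and succeed once it reaches 100.
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# In the run above, waitforio /var/tmp/bdevperf.sock Nvme1n1 saw 3, then 67, then 131 reads before breaking out.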
00:22:38.277 [2024-05-15 16:00:36.638752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.277 [2024-05-15 16:00:36.638772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.277 [2024-05-15 16:00:36.638782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.277 [2024-05-15 16:00:36.638794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.277 [2024-05-15 16:00:36.638803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.277 [... nvme_qpair.c: the same READ command / ABORTED - SQ DELETION completion pair repeats for cid:2 through cid:29 (lba stepping by 128 from 24832 to 28288); these prints were interleaved mid-token with dozens of identical tcp.c:1598:nvmf_tcp_qpair_set_recv_state errors for tqpair=0x18d3cf0 ("is same with the state(5) to be set"); the two streams are de-interleaved and the duplicates collapsed here ...]
00:22:38.279 [2024-05-15 16:00:36.639445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.279 [2024-05-15 16:00:36.639456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15
16:00:36.639467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639667] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.639981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.639991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.640002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.640011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.640021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.640030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.640041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.640050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.640060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.640070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.279 [2024-05-15 16:00:36.640081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.279 [2024-05-15 16:00:36.640090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.279 [2024-05-15 16:00:36.640101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.279 [2024-05-15 16:00:36.640110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.279 [2024-05-15 16:00:36.640120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.279 [2024-05-15 16:00:36.640131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.279 [2024-05-15 16:00:36.640144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1650720 is same with the state(5) to be set
00:22:38.279 [2024-05-15 16:00:36.640413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4190 is same with the state(5) to be set
00:22:38.279 [2024-05-15 16:00:36.640437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4190 is same with the state(5) to be set
00:22:38.279 [2024-05-15 16:00:36.640538] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1650720 was disconnected and freed. reset controller.
00:22:38.279 [2024-05-15 16:00:36.641026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4630 is same with the state(5) to be set
00:22:38.279 [2024-05-15 16:00:36.641580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4ad0 is same with the state(5) to be set
00:22:38.280 [... tcp.c:1598:nvmf_tcp_qpair_set_recv_state: the tqpair=0x18d4ad0 error above repeated ~60 more times between 16:00:36.641596 and 16:00:36.642143 (identical lines collapsed); the run was interrupted only by the two records below ...]
00:22:38.280 [2024-05-15 16:00:36.642047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:38.280 [2024-05-15 16:00:36.642103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a83f0 (9): Bad file descriptor
00:22:38.280 [2024-05-15 16:00:36.642176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.280 [2024-05-15 16:00:36.642189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.280 [2024-05-15 16:00:36.642208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.280 [2024-05-15 16:00:36.642218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.280 [2024-05-15 16:00:36.642228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.280 [2024-05-15 16:00:36.642238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.280 [2024-05-15 16:00:36.642248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.280 [2024-05-15 16:00:36.642258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.280 [2024-05-15 16:00:36.642268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.280 [2024-05-15 16:00:36.642277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.280 [2024-05-15 16:00:36.642288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.280 [2024-05-15 16:00:36.642300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.280 [2024-05-15 16:00:36.642312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.280 [2024-05-15 16:00:36.642321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.280 [2024-05-15 16:00:36.642331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.280 [2024-05-15 16:00:36.642340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.280 [2024-05-15 16:00:36.642351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.280 [2024-05-15 16:00:36.642360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:38.281 [2024-05-15 16:00:36.642699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 
16:00:36.642900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.642988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.642997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.643008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.643017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.643028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.643037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.643049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.643059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.643069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.643078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.281 [2024-05-15 16:00:36.643089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.281 [2024-05-15 16:00:36.643099] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.281 [2024-05-15 16:00:36.643109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.281 [2024-05-15 16:00:36.643119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.282 [2024-05-15 16:00:36.643495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.282 [2024-05-15 16:00:36.643506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643561] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1580250 was disconnected and freed. reset controller.
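
The bdev_nvme notice above is the interesting event in this stretch of output: the bdev layer saw that I/O qpair 0x1580250 was disconnected, freed it, and kicked off a controller reset, and every READ/WRITE still queued on the deleted submission queue was completed with the generic NVMe status ABORTED - SQ DELETION, printed as (00/08), i.e. status code type 0x0, status code 0x08 (Command Aborted due to SQ Deletion). Because the same completion line repeats for every in-flight command, a summary is more useful than reading the flood line by line; the following is a minimal shell sketch, not part of the test run, assuming this console output has been saved to a file named build.log (a hypothetical filename):

  #!/usr/bin/env bash
  # Summarize the controller-reset abort flood in a saved copy of this
  # console log. "build.log" is a hypothetical filename, not produced
  # by the job itself.
  log=${1:-build.log}

  # Total completions printed with the ABORTED - SQ DELETION (00/08) status.
  grep -o 'ABORTED - SQ DELETION' "$log" | wc -l

  # Aborted I/O broken down by opcode and submission queue id.
  grep -Eo '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' "$log" | sort | uniq -c

  # Distinct target-side TCP qpairs that logged recv-state transitions.
  grep -Eo 'tqpair=0x[0-9a-f]+' "$log" | sort -u

grep -o is used so the counts are per match rather than per line, which matters when many entries arrive folded onto one console line; on output like this it reduces hundreds of near-identical lines to a handful of counts plus the small set of tqpair addresses (here 0x18d4f70, 0x18d5410 and 0x18ecdc0) whose recv-state messages account for most of the noise.
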
00:22:38.282 [2024-05-15 16:00:36.643570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.282 [2024-05-15 16:00:36.643641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.283 [2024-05-15 16:00:36.643695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.283 [2024-05-15 16:00:36.643715] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.283 [2024-05-15 16:00:36.643744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.283 [2024-05-15 16:00:36.643762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4f70 is same with the state(5) to be set
00:22:38.283 [2024-05-15 16:00:36.643769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.283 [2024-05-15 16:00:36.643789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.283 [2024-05-15 16:00:36.643809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.283 [2024-05-15 16:00:36.643829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.283 [2024-05-15 16:00:36.643849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.283 [2024-05-15 16:00:36.643858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.643889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.643898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.643909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.643917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.643929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.643938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.643952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.643962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.643972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.643981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.643991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.283 [2024-05-15 16:00:36.644224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.283 [2024-05-15 16:00:36.644572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.283 [2024-05-15 16:00:36.644590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.283 [2024-05-15 16:00:36.644599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.283 [2024-05-15 16:00:36.644608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.283 [2024-05-15 16:00:36.644616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.283 [2024-05-15 16:00:36.644625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.283 [2024-05-15 16:00:36.644634] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.283 [2024-05-15 16:00:36.644643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the 
state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.644996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5410 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.645692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecdc0 is same with the state(5) to be set 00:22:38.284 [2024-05-15 16:00:36.656403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.284 [2024-05-15 16:00:36.656422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.284 [2024-05-15 16:00:36.656436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.284 [2024-05-15 16:00:36.656449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.284 [2024-05-15 16:00:36.656465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.284 [2024-05-15 16:00:36.656478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.284 [2024-05-15 16:00:36.656492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.284 [2024-05-15 16:00:36.656505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.284 [2024-05-15 16:00:36.656519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.285 [2024-05-15 16:00:36.656764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.285 [2024-05-15 16:00:36.656776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: every remaining outstanding I/O on this qpair — WRITE cid:49-63 (lba 39040-40832) and READ cid:0-7 (lba 32768-33664), all len:128, SGL TRANSPORT DATA BLOCK — was printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) ...]
00:22:38.286 [2024-05-15 16:00:36.657482] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1515590 was disconnected and freed. reset controller.
[... a second qpair is flushed the same way: WRITE cid:57-63 (lba 31872-32640) and READ cid:0-56 (lba 24576-31744), each len:128 and each aborted with SQ DELETION (00/08) ...]
00:22:38.288 [2024-05-15 16:00:36.659395] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15218f0 was disconnected and freed. reset controller.
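Every completion above carries status "(00/08)": status code type 0x0 (generic) with status code 0x8, ABORTED - SQ DELETION. The commands were flushed because their submission queue was deleted during the controller reset, not failed by the device. A minimal sketch of an SPDK I/O completion callback that separates these flushes from real I/O errors — requeue_io()/fail_io() are hypothetical application helpers; the callback shape and status constants are from SPDK's public headers:

#include "spdk/nvme.h"

static void requeue_io(void *io) { (void)io; /* application-specific resubmit */ }
static void fail_io(void *io)    { (void)io; /* application-specific error path */ }

/* Completion callback for I/O submitted with, e.g.,
 * spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(). */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;	/* normal completion */
	}

	/* "(00/08)" == SCT 0x0 (generic) / SC 0x8: the command was flushed
	 * during qpair teardown, so it is safe to resubmit once the
	 * controller comes back, rather than reporting a device error. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		requeue_io(cb_arg);
		return;
	}

	/* Anything else is a genuine I/O failure. */
	fail_io(cb_arg);
}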
00:22:38.288 [2024-05-15 16:00:36.660069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.288 [2024-05-15 16:00:36.660099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: WRITE cid:16-63 (lba 34816-40832) and READ cid:0-14 (lba 32768-34560), all len:128, each aborted with SQ DELETION (00/08) ...]
00:22:38.290 [2024-05-15 16:00:36.666875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:38.290 [2024-05-15 16:00:36.666941] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x164f210 was disconnected and freed. reset controller.
[... admin qpair teardown elided: for each controller, the four outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3, cdw10:00000000 cdw11:00000000) were aborted with SQ DELETION (00/08), each block followed by nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: "The recv state of tqpair=... is same with the state(5) to be set" for tqpairs 0x158b760, 0x16f2250, 0x1555100, 0x102d610, 0x154a7c0, 0x15279f0, 0x15490e0, 0x1542a70 and 0x15d6cd0 ...]
00:22:38.291 [2024-05-15 16:00:36.669621] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:38.291 [2024-05-15 16:00:36.673644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:38.291 [2024-05-15 16:00:36.673685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:38.291 [2024-05-15 16:00:36.673701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:38.291 [2024-05-15 16:00:36.673722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15490e0 (9): Bad file descriptor
00:22:38.291 [2024-05-15 16:00:36.673747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1542a70 (9): Bad file descriptor
00:22:38.291 [2024-05-15 16:00:36.673765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2250 (9): Bad file descriptor
00:22:38.291 [2024-05-15 16:00:36.674262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.291 [2024-05-15 16:00:36.674609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.291 [2024-05-15 16:00:36.674627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a83f0 with addr=10.0.0.2, port=4420
00:22:38.291 [2024-05-15 16:00:36.674643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a83f0 is same with the state(5) to be set
00:22:38.291 [2024-05-15 16:00:36.674926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:38.291 [2024-05-15 16:00:36.674961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6cd0 (9): Bad file descriptor
00:22:38.291 [2024-05-15 16:00:36.675010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a83f0 (9): Bad file descriptor
descriptor 00:22:38.291 [2024-05-15 16:00:36.676513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.291 [2024-05-15 16:00:36.676914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.291 [2024-05-15 16:00:36.676932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2250 with addr=10.0.0.2, port=4420 00:22:38.291 [2024-05-15 16:00:36.676946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2250 is same with the state(5) to be set 00:22:38.291 [2024-05-15 16:00:36.677292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.291 [2024-05-15 16:00:36.677520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.677537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1542a70 with addr=10.0.0.2, port=4420 00:22:38.292 [2024-05-15 16:00:36.677551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1542a70 is same with the state(5) to be set 00:22:38.292 [2024-05-15 16:00:36.677938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.678326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.678344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15490e0 with addr=10.0.0.2, port=4420 00:22:38.292 [2024-05-15 16:00:36.678358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15490e0 is same with the state(5) to be set 00:22:38.292 [2024-05-15 16:00:36.678387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.678400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.678421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:38.292 [2024-05-15 16:00:36.678535] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.292 [2024-05-15 16:00:36.678594] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.292 [2024-05-15 16:00:36.678647] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.292 [2024-05-15 16:00:36.678675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:38.292 [2024-05-15 16:00:36.679062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.679474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.679492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d6cd0 with addr=10.0.0.2, port=4420 00:22:38.292 [2024-05-15 16:00:36.679506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6cd0 is same with the state(5) to be set 00:22:38.292 [2024-05-15 16:00:36.679523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2250 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1542a70 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15490e0 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158b760 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1555100 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102d610 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154a7c0 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15279f0 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6cd0 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.679852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.679866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.679879] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:38.292 [2024-05-15 16:00:36.679895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.679908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.679920] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:38.292 [2024-05-15 16:00:36.679936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.679948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.679961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:22:38.292 [2024-05-15 16:00:36.680006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:38.292 [2024-05-15 16:00:36.680022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.292 [2024-05-15 16:00:36.680034] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.292 [2024-05-15 16:00:36.680044] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.292 [2024-05-15 16:00:36.680067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.680080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.680092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:38.292 [2024-05-15 16:00:36.680137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.292 [2024-05-15 16:00:36.680516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.680962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.680980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a83f0 with addr=10.0.0.2, port=4420 00:22:38.292 [2024-05-15 16:00:36.680993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a83f0 is same with the state(5) to be set 00:22:38.292 [2024-05-15 16:00:36.681041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a83f0 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.681087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.681102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.681115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:38.292 [2024-05-15 16:00:36.681161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:38.292 [2024-05-15 16:00:36.685137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:38.292 [2024-05-15 16:00:36.685161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:38.292 [2024-05-15 16:00:36.685175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:38.292 [2024-05-15 16:00:36.685624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.686019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.686037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15490e0 with addr=10.0.0.2, port=4420 00:22:38.292 [2024-05-15 16:00:36.686051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15490e0 is same with the state(5) to be set 00:22:38.292 [2024-05-15 16:00:36.686422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.686820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.686838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1542a70 with addr=10.0.0.2, port=4420 00:22:38.292 [2024-05-15 16:00:36.686851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1542a70 is same with the state(5) to be set 00:22:38.292 [2024-05-15 16:00:36.687310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.687705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.292 [2024-05-15 16:00:36.687723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2250 with addr=10.0.0.2, port=4420 00:22:38.292 [2024-05-15 16:00:36.687736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2250 is same with the state(5) to be set 00:22:38.292 [2024-05-15 16:00:36.687783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15490e0 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.687800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1542a70 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.687815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2250 (9): Bad file descriptor 00:22:38.292 [2024-05-15 16:00:36.687875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.687889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.687902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:38.292 [2024-05-15 16:00:36.687919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.687930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.687943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:22:38.292 [2024-05-15 16:00:36.687958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:38.292 [2024-05-15 16:00:36.687970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:38.292 [2024-05-15 16:00:36.687983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:38.292 [2024-05-15 16:00:36.688028] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.292 [2024-05-15 16:00:36.688040] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.292 [2024-05-15 16:00:36.688050] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.293 [2024-05-15 16:00:36.688491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:38.293 [2024-05-15 16:00:36.689020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.293 [2024-05-15 16:00:36.689240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.293 [2024-05-15 16:00:36.689258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d6cd0 with addr=10.0.0.2, port=4420 00:22:38.293 [2024-05-15 16:00:36.689272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6cd0 is same with the state(5) to be set 00:22:38.293 [2024-05-15 16:00:36.689317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6cd0 (9): Bad file descriptor 00:22:38.293 [2024-05-15 16:00:36.689415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:38.293 [2024-05-15 16:00:36.689431] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:38.293 [2024-05-15 16:00:36.689444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:22:38.293 [2024-05-15 16:00:36.689500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 
16:00:36.689803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.689971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.689983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690084] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.293 [2024-05-15 16:00:36.690358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.293 [2024-05-15 16:00:36.690374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.690974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.690989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.691313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.691327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657190 is same with the state(5) to be set 00:22:38.294 [2024-05-15 16:00:36.692567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.692588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.294 [2024-05-15 16:00:36.692605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.294 [2024-05-15 16:00:36.692618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.692987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.692998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.295 [2024-05-15 16:00:36.693634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.295 [2024-05-15 16:00:36.693648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.296 [2024-05-15 16:00:36.693660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.296 [2024-05-15 16:00:36.693674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.296 [2024-05-15 16:00:36.693685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.296 [2024-05-15 16:00:36.693698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.296 [2024-05-15 16:00:36.693709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.296 [2024-05-15 16:00:36.693723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.296 [2024-05-15 16:00:36.693735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:38.296 [repetitive NOTICE/completion pairs condensed: every READ command listed below was completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 by nvme_qpair.c: 474:spdk_nvme_print_completion]
00:22:38.296 [2024-05-15 16:00:36.693748-16:00:36.694185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46-63 nsid:1 lba:22272-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.296 [2024-05-15 16:00:36.694202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15091a0 is same with the state(5) to be set
00:22:38.296 [2024-05-15 16:00:36.695413-16:00:36.697018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.298 [2024-05-15 16:00:36.697031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1522da0 is same with the state(5) to be set
00:22:38.298 [2024-05-15 16:00:36.698223-16:00:36.699829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.299 [2024-05-15 16:00:36.699842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4d400 is same with the state(5) to be set
00:22:38.299 [2024-05-15 16:00:36.701019-16:00:36.702226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-52 nsid:1 lba:16384-23040 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.301 [2024-05-15 16:00:36.702237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.301 [2024-05-15 16:00:36.702451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.301 [2024-05-15 16:00:36.702461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff3e30 is same with the state(5) to be set 00:22:38.301 [2024-05-15 16:00:36.704197] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.301 [2024-05-15 16:00:36.704216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.301 [2024-05-15 16:00:36.704230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:38.301 [2024-05-15 16:00:36.704243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:38.301 [2024-05-15 16:00:36.704325] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:38.301 [2024-05-15 16:00:36.704343] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:38.301 [2024-05-15 16:00:36.704397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:38.301 task offset: 32640 on job bdev=Nvme10n1 fails 00:22:38.301 00:22:38.301 Latency(us) 00:22:38.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.301 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme1n1 ended in about 0.93 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme1n1 : 0.93 137.11 8.57 68.55 0.00 308483.41 23278.39 278501.79 00:22:38.301 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme2n1 ended in about 0.91 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme2n1 : 0.91 210.84 13.18 70.28 0.00 221837.72 20132.66 229847.86 00:22:38.301 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme3n1 ended in about 0.94 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme3n1 : 0.94 136.69 8.54 68.35 0.00 299450.09 23907.53 281857.23 00:22:38.301 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme4n1 ended in about 0.91 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme4n1 : 0.91 280.74 17.55 70.19 0.00 171651.24 18559.80 228170.14 00:22:38.301 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme5n1 ended in about 0.91 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme5n1 : 0.91 210.28 13.14 70.09 0.00 211168.87 18979.23 208876.34 00:22:38.301 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme6n1 ended in about 0.94 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme6n1 : 0.94 204.43 12.78 68.14 0.00 213988.15 19503.51 212231.78 00:22:38.301 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme7n1 ended in about 0.94 seconds with 
error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme7n1 : 0.94 203.82 12.74 67.94 0.00 210951.78 20971.52 211392.92 00:22:38.301 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme8n1 ended in about 0.94 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme8n1 : 0.94 135.51 8.47 67.76 0.00 277227.66 26214.40 266757.73 00:22:38.301 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme9n1 ended in about 0.91 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme9n1 : 0.91 279.96 17.50 69.99 0.00 157194.40 12425.63 203004.31 00:22:38.301 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.301 Job: Nvme10n1 ended in about 0.88 seconds with error 00:22:38.301 Verification LBA range: start 0x0 length 0x400 00:22:38.301 Nvme10n1 : 0.88 217.34 13.58 72.45 0.00 185144.50 3014.66 190421.40 00:22:38.301 =================================================================================================================== 00:22:38.301 Total : 2016.72 126.05 693.73 0.00 217232.71 3014.66 281857.23 00:22:38.301 [2024-05-15 16:00:36.727537] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:38.301 [2024-05-15 16:00:36.727574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:38.301 [2024-05-15 16:00:36.728112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.301 [2024-05-15 16:00:36.728558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.301 [2024-05-15 16:00:36.728573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15279f0 with addr=10.0.0.2, port=4420 00:22:38.301 [2024-05-15 16:00:36.728586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15279f0 is same with the state(5) to be set 00:22:38.301 [2024-05-15 16:00:36.728826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.301 [2024-05-15 16:00:36.729279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.301 [2024-05-15 16:00:36.729292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1555100 with addr=10.0.0.2, port=4420 00:22:38.301 [2024-05-15 16:00:36.729303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1555100 is same with the state(5) to be set 00:22:38.301 [2024-05-15 16:00:36.729642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.301 [2024-05-15 16:00:36.730081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.301 [2024-05-15 16:00:36.730094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102d610 with addr=10.0.0.2, port=4420 00:22:38.301 [2024-05-15 16:00:36.730105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102d610 is same with the state(5) to be set 00:22:38.301 [2024-05-15 16:00:36.731232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:38.301 [2024-05-15 16:00:36.731253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:38.301 [2024-05-15 16:00:36.731264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
00:22:38.301 [2024-05-15 16:00:36.731264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:38.301 [2024-05-15 16:00:36.731275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:38.301 [2024-05-15 16:00:36.731291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:38.301 [2024-05-15 16:00:36.731803 - 16:00:36.736736] posix.c:1037:posix_sock_create / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock / nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: [repeated for tqpair=0x154a7c0, 0x158b760, 0x15a83f0, 0x16f2250, 0x1542a70, 0x15490e0, 0x15d6cd0] connect() failed, errno = 111 (twice per tqpair); sock connection error of tqpair with addr=10.0.0.2, port=4420; The recv state of tqpair is same with the state(5) to be set
00:22:38.302 [2024-05-15 16:00:36.732730 - 16:00:36.736994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor [repeated for tqpair=0x15279f0, 0x1555100, 0x102d610, 0x154a7c0, 0x158b760, 0x15a83f0, 0x16f2250, 0x1542a70, 0x15490e0, 0x15d6cd0]
00:22:38.302 [2024-05-15 16:00:36.732793 - 16:00:36.732821] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (x3)
00:22:38.302 [2024-05-15 16:00:36.736770 - 16:00:36.737252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init / 1750:spdk_nvme_ctrlr_reconnect_poll_async / 1042:nvme_ctrlr_fail: *ERROR*: [repeated for cnode1, cnode3, cnode6, cnode7, cnode8, cnode10, cnode2, cnode4, cnode5, cnode9] Ctrlr is in error state; controller reinitialization failed; in failed state.
00:22:38.302 [2024-05-15 16:00:36.736921 - 16:00:36.737311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (x10)
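This cascade is the expected shape of shutdown_tc3: the target is killed while bdevperf is mid-verify, in-flight READs complete as ABORTED - SQ DELETION, and every subsequent reconnect attempt fails with errno 111 (ECONNREFUSED on Linux). A minimal sketch of that pattern, assuming the framework has already exported $rootdir and $nvmfpid; the real shutdown.sh differs in detail and the bdev/controller config is omitted here:

  # Sketch only, not the actual test script.
  "$rootdir/build/examples/bdevperf" -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!
  sleep 1
  kill -9 "$nvmfpid"        # target dies mid-I/O -> ABORTED - SQ DELETION (00/08) on pending READs
  wait "$perfpid" || true   # bdevperf exits non-zero and prints a per-device failure table like the one above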
00:22:38.562 16:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:38.562 16:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3824066 00:22:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3824066) - No such process 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.942 rmmod nvme_tcp 00:22:39.942 rmmod nvme_fabrics 00:22:39.942 rmmod nvme_keyring 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.942 16:00:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.845 16:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.845 00:22:41.845 real 0m7.941s 00:22:41.845 user 0m19.088s 00:22:41.845 sys 0m1.716s 00:22:41.845 
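Condensed, the teardown that just ran amounts to the following sketch, assembled from the xtrace above rather than copied from stoptarget/nvmftestfini (the $testdir assignment is an assumption):

  testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target   # assumed location
  rm -f ./local-job0-0-verify.state
  rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
  sync
  modprobe -v -r nvme-tcp        # pulls out nvme_tcp, nvme_fabrics, nvme_keyring as logged above
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns                # drops the cvl_0_0_ns_spdk network namespace
  ip -4 addr flush cvl_0_1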
16:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:41.845 16:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:41.845 ************************************ 00:22:41.845 END TEST nvmf_shutdown_tc3 00:22:41.845 ************************************ 00:22:41.845 16:00:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:41.845 00:22:41.845 real 0m33.032s 00:22:41.845 user 1m18.372s 00:22:41.845 sys 0m10.544s 00:22:41.845 16:00:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:41.845 16:00:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:41.845 ************************************ 00:22:41.845 END TEST nvmf_shutdown 00:22:41.845 ************************************ 00:22:41.845 16:00:40 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:41.845 16:00:40 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.845 16:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.845 16:00:40 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:41.845 16:00:40 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:41.845 16:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.845 16:00:40 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:22:41.845 16:00:40 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:41.845 16:00:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:41.845 16:00:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:41.845 16:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.105 ************************************ 00:22:42.105 START TEST nvmf_multicontroller 00:22:42.105 ************************************ 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:42.105 * Looking for test storage... 
00:22:42.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:42.105 16:00:40 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.105 16:00:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.106 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.106 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.106 16:00:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.106 16:00:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.715 16:00:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:48.715 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:48.715 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:48.715 Found net devices under 0000:af:00.0: cvl_0_0 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.715 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:48.715 Found net devices under 0000:af:00.1: cvl_0_1 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.716 16:00:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:48.716 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:48.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:22:48.976 00:22:48.976 --- 10.0.0.2 ping statistics --- 00:22:48.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.976 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:22:48.976 00:22:48.976 --- 10.0.0.1 ping statistics --- 00:22:48.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.976 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3828368 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3828368 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3828368 ']' 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:48.976 16:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.976 [2024-05-15 16:00:47.456071] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:48.976 [2024-05-15 16:00:47.456121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.976 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.976 [2024-05-15 16:00:47.525007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:49.236 [2024-05-15 16:00:47.601414] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.236 [2024-05-15 16:00:47.601449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.236 [2024-05-15 16:00:47.601459] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.236 [2024-05-15 16:00:47.601468] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.236 [2024-05-15 16:00:47.601475] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.236 [2024-05-15 16:00:47.601574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.236 [2024-05-15 16:00:47.601656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.236 [2024-05-15 16:00:47.601658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.804 [2024-05-15 16:00:48.329062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.804 16:00:48 
00:22:49.804 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.063 Malloc0
00:22:50.063 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.063 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:50.063 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.063 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 [2024-05-15 16:00:48.399771] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:22:50.064 [2024-05-15 16:00:48.400023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 [2024-05-15 16:00:48.407912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 Malloc1
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3828648
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3828648 /var/tmp/bdevperf.sock
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3828648 ']'
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:50.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
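At this point the target exports two subsystems, each backed by a 64 MB malloc bdev (512-byte blocks) and listening on both 4420 and 4421. A sketch of the cnode2 half as plain rpc.py calls, with the script path assumed (cnode1 is built identically with Malloc0):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then started with -z, so it idles on /var/tmp/bdevperf.sock until controllers are attached and perform_tests is sent over that second RPC socket.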
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable
00:22:50.064 16:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.001 NVMe0n1
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:51.001 1
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.001 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.001 request:
00:22:51.001 {
00:22:51.001 "name": "NVMe0",
00:22:51.001 "trtype": "tcp",
00:22:51.001 "traddr": "10.0.0.2",
00:22:51.001 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:22:51.001 "hostaddr": "10.0.0.2",
00:22:51.001 "hostsvcid": "60000",
00:22:51.001 "adrfam": "ipv4",
00:22:51.001 "trsvcid": "4420",
00:22:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:51.002 "method": "bdev_nvme_attach_controller",
00:22:51.002 "req_id": 1
00:22:51.002 }
00:22:51.002 Got JSON-RPC error response
00:22:51.002 response:
00:22:51.002 {
00:22:51.002 "code": -114,
00:22:51.002 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:22:51.002 }
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:51.002 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.261 request:
00:22:51.261 {
00:22:51.261 "name": "NVMe0",
00:22:51.261 "trtype": "tcp",
00:22:51.261 "traddr": "10.0.0.2",
00:22:51.261 "hostaddr": "10.0.0.2",
00:22:51.261 "hostsvcid": "60000",
00:22:51.261 "adrfam": "ipv4",
00:22:51.261 "trsvcid": "4420",
00:22:51.261 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:22:51.261 "method": "bdev_nvme_attach_controller",
00:22:51.261 "req_id": 1
00:22:51.261 }
00:22:51.261 Got JSON-RPC error response
00:22:51.261 response:
00:22:51.261 {
00:22:51.261 "code": -114,
00:22:51.261 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:22:51.261 }
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.261 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.261 request:
00:22:51.261 {
00:22:51.261 "name": "NVMe0",
00:22:51.261 "trtype": "tcp",
00:22:51.261 "traddr": "10.0.0.2",
00:22:51.261 "hostaddr": "10.0.0.2",
00:22:51.261 "hostsvcid": "60000",
00:22:51.261 "adrfam": "ipv4",
00:22:51.261 "trsvcid": "4420",
00:22:51.261 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:51.261 "multipath": "disable",
00:22:51.261 "method": "bdev_nvme_attach_controller",
00:22:51.261 "req_id": 1
00:22:51.261 }
00:22:51.261 Got JSON-RPC error response
00:22:51.261 response:
00:22:51.261 {
00:22:51.261 "code": -114,
00:22:51.261 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:22:51.261 }
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.262 request:
00:22:51.262 {
00:22:51.262 "name": "NVMe0",
00:22:51.262 "trtype": "tcp",
00:22:51.262 "traddr": "10.0.0.2",
00:22:51.262 "hostaddr": "10.0.0.2",
00:22:51.262 "hostsvcid": "60000",
00:22:51.262 "adrfam": "ipv4",
00:22:51.262 "trsvcid": "4420",
00:22:51.262 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:51.262 "multipath": "failover",
00:22:51.262 "method": "bdev_nvme_attach_controller",
00:22:51.262 "req_id": 1
00:22:51.262 }
00:22:51.262 Got JSON-RPC error response
00:22:51.262 response:
00:22:51.262 {
00:22:51.262 "code": -114,
00:22:51.262 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:22:51.262 }
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.262
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.262
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
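After -x disable and -x failover are rejected for the already-claimed host path, the positive cases follow: the 4421 listener is attached as a second path under the existing NVMe0 name, detached again, and then attached as a separate NVMe1 controller. The count assertion right below boils down to this sketch:

  n=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)
  [ "$n" = 2 ]   # NVMe0 and NVMe1 both present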
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.262 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:51.521 16:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:51.521 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:22:51.521 16:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:52.458 0
00:22:52.458 16:00:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:22:52.458 16:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.458 16:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:52.458 16:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.458 16:00:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3828648
00:22:52.459 16:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3828648 ']'
00:22:52.459 16:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3828648
00:22:52.459 16:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname
00:22:52.459 16:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:52.459 16:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3828648
00:22:52.459 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:22:52.459 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:22:52.459 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3828648'
00:22:52.459 killing process with pid 3828648
00:22:52.459 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3828648
00:22:52.459 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3828648
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat
00:22:52.718 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:52.718 [2024-05-15 16:00:48.511353] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:22:52.718 [2024-05-15 16:00:48.511403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828648 ]
00:22:52.718 EAL: No free 2048 kB hugepages reported on node 1
00:22:52.718 [2024-05-15 16:00:48.581045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:52.718 [2024-05-15 16:00:48.650729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:52.718 [2024-05-15 16:00:49.805131] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name b91e526a-9096-489d-af2f-a9de96a64bc8 already exists
00:22:52.718 [2024-05-15 16:00:49.805161] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:b91e526a-9096-489d-af2f-a9de96a64bc8 alias for bdev NVMe1n1
00:22:52.718 [2024-05-15 16:00:49.805173] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:22:52.718 Running I/O for 1 seconds...
00:22:52.718
00:22:52.718 Latency(us)
00:22:52.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:52.718 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:52.718 NVMe0n1 : 1.00 24408.94 95.35 0.00 0.00 5232.38 1966.08 20761.80
00:22:52.718 ===================================================================================================================
00:22:52.718 Total : 24408.94 95.35 0.00 0.00 5232.38 1966.08 20761.80
00:22:52.718 Received shutdown signal, test time was about 1.000000 seconds
00:22:52.718
00:22:52.718 Latency(us)
00:22:52.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:52.718 ===================================================================================================================
00:22:52.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:52.718 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:52.718 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:52.978 rmmod nvme_tcp
00:22:52.978 rmmod nvme_fabrics
00:22:52.978 rmmod nvme_keyring
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3828368 ']'
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3828368
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3828368 ']'
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3828368
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3828368
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3828368'
00:22:52.978 killing process with pid 3828368
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3828368
00:22:52.978 [2024-05-15 16:00:51.370118] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:22:52.978 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3828368
00:22:53.238 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:53.238 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:53.238 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:53.238 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:53.238 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:53.238 16:00:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:53.238 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:53.238 16:00:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:55.144 16:00:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:55.144
00:22:55.144 real 0m13.232s
00:22:55.144 user 0m16.779s
00:22:55.144 sys 0m6.088s
00:22:55.144 16:00:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable
00:22:55.144 16:00:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:55.144 ************************************
00:22:55.144 END TEST nvmf_multicontroller
00:22:55.144 ************************************
00:22:55.404 16:00:53 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:22:55.404 16:00:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:22:55.404 16:00:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:22:55.404 16:00:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:55.404 ************************************
00:22:55.404 START TEST nvmf_aer
00:22:55.404 ************************************
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
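run_test wraps each script so the banners and the real/user/sys timing above are emitted per test. To reproduce just this test outside the harness, something like the following should work from an SPDK checkout (root is assumed for the namespace and iptables setup):

  sudo ./test/nvmf/host/aer.sh --transport=tcp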
00:22:55.404 * Looking for test storage...
00:22:55.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:55.404 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable
00:22:55.405 16:00:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=()
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs
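gather_supported_nvmf_pci_devs classifies ports by PCI ID (E810: 8086:1592 and 8086:159b, X722: 8086:37d2, plus the Mellanox list) and, because SPDK_TEST_NVMF_NICS=e810, keeps only the e810 set. A sketch of the same lookup with stock tools, with the device address taken from the discovery output below:

  lspci -d 8086:159b                            # list E810 functions on this host
  ls /sys/bus/pci/devices/0000:af:00.0/net/     # kernel netdev behind one function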
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=()
00:23:01.979 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=()
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=()
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=()
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:23:01.980 Found 0000:af:00.0 (0x8086 - 0x159b)
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:23:01.980 Found 0000:af:00.1 (0x8086 - 0x159b)
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:23:01.980 Found net devices under 0000:af:00.0: cvl_0_0
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:23:01.980 Found net devices under 0000:af:00.1: cvl_0_1
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
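With two matching ports found, the pairing rule is positional: the first discovered port becomes the target interface and the second stays with the initiator. Roughly, with the names from the discovery output above assumed:

  net_devs=(cvl_0_0 cvl_0_1)
  NVMF_TARGET_INTERFACE=${net_devs[0]}       # moved into cvl_0_0_ns_spdk below
  NVMF_INITIATOR_INTERFACE=${net_devs[1]}    # keeps 10.0.0.1 in the root namespace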
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:01.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:01.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms
00:23:01.980
00:23:01.980 --- 10.0.0.2 ping statistics ---
00:23:01.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:01.980 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:01.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:01.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms
00:23:01.980
00:23:01.980 --- 10.0.0.1 ping statistics ---
00:23:01.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:01.980 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable
00:23:01.980 16:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3832850
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3832850
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3832850 ']'
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:02.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:02.239 16:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
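This target runs with -m 0xF where the multicontroller target used 0xE: each set bit in the mask pins one reactor to that core, which is why four Reactor started notices follow here against three earlier. A small sketch of decoding the masks:

  for mask in 0xE 0xF; do
      printf '%s -> cores:' "$mask"
      for core in 0 1 2 3; do (( mask >> core & 1 )) && printf ' %d' "$core"; done
      echo
  done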
00:23:02.239 [2024-05-15 16:01:00.594884] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:23:02.239 [2024-05-15 16:01:00.594932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:02.239 EAL: No free 2048 kB hugepages reported on node 1
00:23:02.239 [2024-05-15 16:01:00.669925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:02.239 [2024-05-15 16:01:00.745646] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:02.239 [2024-05-15 16:01:00.745683] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:02.239 [2024-05-15 16:01:00.745692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:02.239 [2024-05-15 16:01:00.745701] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:02.239 [2024-05-15 16:01:00.745708] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:02.239 [2024-05-15 16:01:00.745752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:02.239 [2024-05-15 16:01:00.745849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:02.239 [2024-05-15 16:01:00.745932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:23:02.239 [2024-05-15 16:01:00.745935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:03.176 [2024-05-15 16:01:01.452123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:03.176 Malloc0
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:03.176 [2024-05-15 16:01:01.506693] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:23:03.176 [2024-05-15 16:01:01.506955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:03.176 [
00:23:03.176 {
00:23:03.176 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:03.176 "subtype": "Discovery",
00:23:03.176 "listen_addresses": [],
00:23:03.176 "allow_any_host": true,
00:23:03.176 "hosts": []
00:23:03.176 },
00:23:03.176 {
00:23:03.176 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:03.176 "subtype": "NVMe",
00:23:03.176 "listen_addresses": [
00:23:03.176 {
00:23:03.176 "trtype": "TCP",
00:23:03.176 "adrfam": "IPv4",
00:23:03.176 "traddr": "10.0.0.2",
00:23:03.176 "trsvcid": "4420"
00:23:03.176 }
00:23:03.176 ],
00:23:03.176 "allow_any_host": true,
00:23:03.176 "hosts": [],
00:23:03.176 "serial_number": "SPDK00000000000001",
00:23:03.176 "model_number": "SPDK bdev Controller",
00:23:03.176 "max_namespaces": 2,
00:23:03.176 "min_cntlid": 1,
00:23:03.176 "max_cntlid": 65519,
00:23:03.176 "namespaces": [
00:23:03.176 {
00:23:03.176 "nsid": 1,
00:23:03.176 "bdev_name": "Malloc0",
00:23:03.176 "name": "Malloc0",
00:23:03.176 "nguid": "AFD395A21A884012BCF0ED515B87C46C",
00:23:03.176 "uuid": "afd395a2-1a88-4012-bcf0-ed515b87c46c"
00:23:03.176 }
00:23:03.176 ]
00:23:03.176 }
00:23:03.176 ]
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3832916
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']'
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1
00:23:03.176 EAL: No free 2048 kB hugepages reported on node 1
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']'
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2
00:23:03.176 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1
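The aer binary connects to cnode1, registers its event callback, and touches /tmp/aer_touch_file once the namespace-change event arrives, while waitforfile polls for that file. Adding Malloc1 as namespace 2 in the next step is what fires the namespace attribute changed AEN (log page 4). A condensed sketch of the synchronization, paths assumed:

  rm -f /tmp/aer_touch_file
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AEN
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  wait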
-e /tmp/aer_touch_file ']' 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.436 Malloc1 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.436 [ 00:23:03.436 { 00:23:03.436 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:03.436 "subtype": "Discovery", 00:23:03.436 "listen_addresses": [], 00:23:03.436 "allow_any_host": true, 00:23:03.436 "hosts": [] 00:23:03.436 }, 00:23:03.436 { 00:23:03.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.436 "subtype": "NVMe", 00:23:03.436 "listen_addresses": [ 00:23:03.436 { 00:23:03.436 "trtype": "TCP", 00:23:03.436 "adrfam": "IPv4", 00:23:03.436 "traddr": "10.0.0.2", 00:23:03.436 "trsvcid": "4420" 00:23:03.436 } 00:23:03.436 ], 00:23:03.436 "allow_any_host": true, 00:23:03.436 "hosts": [], 00:23:03.436 "serial_number": "SPDK00000000000001", 00:23:03.436 "model_number": "SPDK bdev Controller", 00:23:03.436 Asynchronous Event Request test 00:23:03.436 Attaching to 10.0.0.2 00:23:03.436 Attached to 10.0.0.2 00:23:03.436 Registering asynchronous event callbacks... 00:23:03.436 Starting namespace attribute notice tests for all controllers... 00:23:03.436 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:03.436 aer_cb - Changed Namespace 00:23:03.436 Cleaning up... 
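The AER run above reduces to a short RPC sequence against the target's default UNIX socket (/var/tmp/spdk.sock). A minimal sketch, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (rpc_cmd in the harness is a thin wrapper around it); every subsystem name, address, and flag below is taken from the trace above:

rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
rpc.py bdev_malloc_create 64 512 --name Malloc0          # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The aer helper connects and arms its callbacks; hot-adding a second
# namespace (nvmf_subsystem_add_ns ... Malloc1 -n 2) is what raises the
# Changed Namespace event (log page 4, event type 0x02) seen in the output.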
00:23:03.436 "max_namespaces": 2, 00:23:03.436 "min_cntlid": 1, 00:23:03.436 "max_cntlid": 65519, 00:23:03.436 "namespaces": [ 00:23:03.436 { 00:23:03.436 "nsid": 1, 00:23:03.436 "bdev_name": "Malloc0", 00:23:03.436 "name": "Malloc0", 00:23:03.436 "nguid": "AFD395A21A884012BCF0ED515B87C46C", 00:23:03.436 "uuid": "afd395a2-1a88-4012-bcf0-ed515b87c46c" 00:23:03.436 }, 00:23:03.436 { 00:23:03.436 "nsid": 2, 00:23:03.436 "bdev_name": "Malloc1", 00:23:03.436 "name": "Malloc1", 00:23:03.436 "nguid": "7ABCCDC8E9124D62B81A4B32AA5F0004", 00:23:03.436 "uuid": "7abccdc8-e912-4d62-b81a-4b32aa5f0004" 00:23:03.436 } 00:23:03.436 ] 00:23:03.436 } 00:23:03.436 ] 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3832916 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.436 rmmod nvme_tcp 00:23:03.436 rmmod nvme_fabrics 00:23:03.436 rmmod nvme_keyring 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:03.436 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3832850 ']' 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3832850 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3832850 ']' 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3832850 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- 
# ps --no-headers -o comm= 3832850 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3832850' 00:23:03.437 killing process with pid 3832850 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3832850 00:23:03.437 [2024-05-15 16:01:01.996860] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:03.437 16:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3832850 00:23:03.696 16:01:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.696 16:01:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.696 16:01:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.696 16:01:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.696 16:01:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.696 16:01:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.696 16:01:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.696 16:01:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.235 16:01:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.235 00:23:06.235 real 0m10.495s 00:23:06.235 user 0m7.633s 00:23:06.235 sys 0m5.511s 00:23:06.235 16:01:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:06.235 16:01:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.235 ************************************ 00:23:06.235 END TEST nvmf_aer 00:23:06.235 ************************************ 00:23:06.235 16:01:04 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:06.235 16:01:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:06.235 16:01:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:06.235 16:01:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.235 ************************************ 00:23:06.235 START TEST nvmf_async_init 00:23:06.235 ************************************ 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:06.235 * Looking for test storage... 
00:23:06.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f50645992aa94794b480b25ac563a71c 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.235 16:01:04 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.235 16:01:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:12.845 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:12.845 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:12.845 Found net devices under 0000:af:00.0: cvl_0_0 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
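The loop traced here is how the harness picks physical ports: it matches supported PCI device IDs (here the two 0x8086:0x159b functions of the E810, bound to ice) and resolves each PCI function to its kernel net device through sysfs. Roughly, with the device addresses reported in the log:

for pci in 0000:af:00.0 0000:af:00.1; do
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${netdev##*/}"    # cvl_0_0, then cvl_0_1
    done
done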
00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:12.845 Found net devices under 0000:af:00.1: cvl_0_1 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:12.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:23:12.845 00:23:12.845 --- 10.0.0.2 ping statistics --- 00:23:12.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.845 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:23:12.845 00:23:12.845 --- 10.0.0.1 ping statistics --- 00:23:12.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.845 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3836612 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3836612 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3836612 ']' 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.845 16:01:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:12.846 16:01:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:12.846 [2024-05-15 16:01:10.973329] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
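nvmf_tcp_init, traced above, splits the two E810 ports across network namespaces: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, and the bidirectional pings verify the path before nvmf_tgt is launched inside the namespace (NVMF_APP is prefixed with the "ip netns exec" command). Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> initiator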
00:23:12.846 [2024-05-15 16:01:10.973377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.846 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.846 [2024-05-15 16:01:11.046421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.846 [2024-05-15 16:01:11.120472] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.846 [2024-05-15 16:01:11.120507] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.846 [2024-05-15 16:01:11.120516] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.846 [2024-05-15 16:01:11.120525] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.846 [2024-05-15 16:01:11.120532] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.846 [2024-05-15 16:01:11.120552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 [2024-05-15 16:01:11.812075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 null0 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f50645992aa94794b480b25ac563a71c 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 [2024-05-15 16:01:11.852133] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:13.415 [2024-05-15 16:01:11.852319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.415 16:01:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.675 nvme0n1 00:23:13.675 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.675 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.675 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.675 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.675 [ 00:23:13.675 { 00:23:13.675 "name": "nvme0n1", 00:23:13.675 "aliases": [ 00:23:13.675 "f5064599-2aa9-4794-b480-b25ac563a71c" 00:23:13.675 ], 00:23:13.675 "product_name": "NVMe disk", 00:23:13.675 "block_size": 512, 00:23:13.675 "num_blocks": 2097152, 00:23:13.675 "uuid": "f5064599-2aa9-4794-b480-b25ac563a71c", 00:23:13.675 "assigned_rate_limits": { 00:23:13.675 "rw_ios_per_sec": 0, 00:23:13.675 "rw_mbytes_per_sec": 0, 00:23:13.675 "r_mbytes_per_sec": 0, 00:23:13.675 "w_mbytes_per_sec": 0 00:23:13.675 }, 00:23:13.675 "claimed": false, 00:23:13.675 "zoned": false, 00:23:13.675 "supported_io_types": { 00:23:13.675 "read": true, 00:23:13.675 "write": true, 00:23:13.675 "unmap": false, 00:23:13.675 "write_zeroes": true, 00:23:13.675 "flush": true, 00:23:13.675 "reset": true, 00:23:13.675 "compare": true, 00:23:13.675 "compare_and_write": true, 00:23:13.675 "abort": true, 00:23:13.675 "nvme_admin": true, 00:23:13.675 "nvme_io": true 00:23:13.675 }, 00:23:13.675 "memory_domains": [ 00:23:13.675 { 00:23:13.675 "dma_device_id": "system", 00:23:13.675 "dma_device_type": 1 00:23:13.675 } 00:23:13.675 ], 00:23:13.675 "driver_specific": { 00:23:13.675 "nvme": [ 00:23:13.675 { 00:23:13.675 "trid": { 00:23:13.675 "trtype": "TCP", 00:23:13.675 "adrfam": "IPv4", 00:23:13.675 "traddr": "10.0.0.2", 00:23:13.675 "trsvcid": "4420", 00:23:13.675 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.675 }, 
00:23:13.675 "ctrlr_data": { 00:23:13.675 "cntlid": 1, 00:23:13.675 "vendor_id": "0x8086", 00:23:13.675 "model_number": "SPDK bdev Controller", 00:23:13.675 "serial_number": "00000000000000000000", 00:23:13.675 "firmware_revision": "24.05", 00:23:13.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.675 "oacs": { 00:23:13.675 "security": 0, 00:23:13.675 "format": 0, 00:23:13.675 "firmware": 0, 00:23:13.675 "ns_manage": 0 00:23:13.675 }, 00:23:13.675 "multi_ctrlr": true, 00:23:13.675 "ana_reporting": false 00:23:13.675 }, 00:23:13.675 "vs": { 00:23:13.675 "nvme_version": "1.3" 00:23:13.675 }, 00:23:13.675 "ns_data": { 00:23:13.675 "id": 1, 00:23:13.675 "can_share": true 00:23:13.675 } 00:23:13.675 } 00:23:13.675 ], 00:23:13.675 "mp_policy": "active_passive" 00:23:13.675 } 00:23:13.675 } 00:23:13.675 ] 00:23:13.675 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.675 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:13.675 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.675 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.675 [2024-05-15 16:01:12.104851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.676 [2024-05-15 16:01:12.104906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ecf30 (9): Bad file descriptor 00:23:13.676 [2024-05-15 16:01:12.237299] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.936 [ 00:23:13.936 { 00:23:13.936 "name": "nvme0n1", 00:23:13.936 "aliases": [ 00:23:13.936 "f5064599-2aa9-4794-b480-b25ac563a71c" 00:23:13.936 ], 00:23:13.936 "product_name": "NVMe disk", 00:23:13.936 "block_size": 512, 00:23:13.936 "num_blocks": 2097152, 00:23:13.936 "uuid": "f5064599-2aa9-4794-b480-b25ac563a71c", 00:23:13.936 "assigned_rate_limits": { 00:23:13.936 "rw_ios_per_sec": 0, 00:23:13.936 "rw_mbytes_per_sec": 0, 00:23:13.936 "r_mbytes_per_sec": 0, 00:23:13.936 "w_mbytes_per_sec": 0 00:23:13.936 }, 00:23:13.936 "claimed": false, 00:23:13.936 "zoned": false, 00:23:13.936 "supported_io_types": { 00:23:13.936 "read": true, 00:23:13.936 "write": true, 00:23:13.936 "unmap": false, 00:23:13.936 "write_zeroes": true, 00:23:13.936 "flush": true, 00:23:13.936 "reset": true, 00:23:13.936 "compare": true, 00:23:13.936 "compare_and_write": true, 00:23:13.936 "abort": true, 00:23:13.936 "nvme_admin": true, 00:23:13.936 "nvme_io": true 00:23:13.936 }, 00:23:13.936 "memory_domains": [ 00:23:13.936 { 00:23:13.936 "dma_device_id": "system", 00:23:13.936 "dma_device_type": 1 00:23:13.936 } 00:23:13.936 ], 00:23:13.936 "driver_specific": { 00:23:13.936 "nvme": [ 00:23:13.936 { 00:23:13.936 "trid": { 00:23:13.936 "trtype": "TCP", 00:23:13.936 "adrfam": "IPv4", 00:23:13.936 "traddr": "10.0.0.2", 00:23:13.936 "trsvcid": "4420", 00:23:13.936 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.936 }, 00:23:13.936 "ctrlr_data": { 00:23:13.936 "cntlid": 2, 00:23:13.936 
"vendor_id": "0x8086", 00:23:13.936 "model_number": "SPDK bdev Controller", 00:23:13.936 "serial_number": "00000000000000000000", 00:23:13.936 "firmware_revision": "24.05", 00:23:13.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.936 "oacs": { 00:23:13.936 "security": 0, 00:23:13.936 "format": 0, 00:23:13.936 "firmware": 0, 00:23:13.936 "ns_manage": 0 00:23:13.936 }, 00:23:13.936 "multi_ctrlr": true, 00:23:13.936 "ana_reporting": false 00:23:13.936 }, 00:23:13.936 "vs": { 00:23:13.936 "nvme_version": "1.3" 00:23:13.936 }, 00:23:13.936 "ns_data": { 00:23:13.936 "id": 1, 00:23:13.936 "can_share": true 00:23:13.936 } 00:23:13.936 } 00:23:13.936 ], 00:23:13.936 "mp_policy": "active_passive" 00:23:13.936 } 00:23:13.936 } 00:23:13.936 ] 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xaWAwUc8cq 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xaWAwUc8cq 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.936 [2024-05-15 16:01:12.293426] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.936 [2024-05-15 16:01:12.293546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xaWAwUc8cq 00:23:13.936 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.937 [2024-05-15 16:01:12.301447] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.937 16:01:12 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xaWAwUc8cq 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.937 [2024-05-15 16:01:12.309464] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.937 [2024-05-15 16:01:12.309502] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.937 nvme0n1 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.937 [ 00:23:13.937 { 00:23:13.937 "name": "nvme0n1", 00:23:13.937 "aliases": [ 00:23:13.937 "f5064599-2aa9-4794-b480-b25ac563a71c" 00:23:13.937 ], 00:23:13.937 "product_name": "NVMe disk", 00:23:13.937 "block_size": 512, 00:23:13.937 "num_blocks": 2097152, 00:23:13.937 "uuid": "f5064599-2aa9-4794-b480-b25ac563a71c", 00:23:13.937 "assigned_rate_limits": { 00:23:13.937 "rw_ios_per_sec": 0, 00:23:13.937 "rw_mbytes_per_sec": 0, 00:23:13.937 "r_mbytes_per_sec": 0, 00:23:13.937 "w_mbytes_per_sec": 0 00:23:13.937 }, 00:23:13.937 "claimed": false, 00:23:13.937 "zoned": false, 00:23:13.937 "supported_io_types": { 00:23:13.937 "read": true, 00:23:13.937 "write": true, 00:23:13.937 "unmap": false, 00:23:13.937 "write_zeroes": true, 00:23:13.937 "flush": true, 00:23:13.937 "reset": true, 00:23:13.937 "compare": true, 00:23:13.937 "compare_and_write": true, 00:23:13.937 "abort": true, 00:23:13.937 "nvme_admin": true, 00:23:13.937 "nvme_io": true 00:23:13.937 }, 00:23:13.937 "memory_domains": [ 00:23:13.937 { 00:23:13.937 "dma_device_id": "system", 00:23:13.937 "dma_device_type": 1 00:23:13.937 } 00:23:13.937 ], 00:23:13.937 "driver_specific": { 00:23:13.937 "nvme": [ 00:23:13.937 { 00:23:13.937 "trid": { 00:23:13.937 "trtype": "TCP", 00:23:13.937 "adrfam": "IPv4", 00:23:13.937 "traddr": "10.0.0.2", 00:23:13.937 "trsvcid": "4421", 00:23:13.937 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.937 }, 00:23:13.937 "ctrlr_data": { 00:23:13.937 "cntlid": 3, 00:23:13.937 "vendor_id": "0x8086", 00:23:13.937 "model_number": "SPDK bdev Controller", 00:23:13.937 "serial_number": "00000000000000000000", 00:23:13.937 "firmware_revision": "24.05", 00:23:13.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.937 "oacs": { 00:23:13.937 "security": 0, 00:23:13.937 "format": 0, 00:23:13.937 "firmware": 0, 00:23:13.937 "ns_manage": 0 00:23:13.937 }, 00:23:13.937 "multi_ctrlr": true, 00:23:13.937 "ana_reporting": false 00:23:13.937 }, 00:23:13.937 "vs": { 00:23:13.937 "nvme_version": "1.3" 00:23:13.937 }, 00:23:13.937 "ns_data": { 00:23:13.937 "id": 1, 00:23:13.937 "can_share": true 00:23:13.937 } 00:23:13.937 } 00:23:13.937 ], 00:23:13.937 "mp_policy": "active_passive" 00:23:13.937 } 00:23:13.937 } 00:23:13.937 ] 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.xaWAwUc8cq 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.937 rmmod nvme_tcp 00:23:13.937 rmmod nvme_fabrics 00:23:13.937 rmmod nvme_keyring 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3836612 ']' 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3836612 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3836612 ']' 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3836612 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:13.937 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3836612 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3836612' 00:23:14.197 killing process with pid 3836612 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3836612 00:23:14.197 [2024-05-15 16:01:12.518935] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:14.197 [2024-05-15 16:01:12.518960] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:14.197 [2024-05-15 16:01:12.518974] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3836612 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.197 16:01:12 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.197 16:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.734 16:01:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.734 00:23:16.734 real 0m10.407s 00:23:16.734 user 0m3.602s 00:23:16.734 sys 0m5.253s 00:23:16.734 16:01:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:16.734 16:01:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.734 ************************************ 00:23:16.734 END TEST nvmf_async_init 00:23:16.734 ************************************ 00:23:16.734 16:01:14 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:16.734 16:01:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:16.734 16:01:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:16.734 16:01:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.734 ************************************ 00:23:16.734 START TEST dma 00:23:16.734 ************************************ 00:23:16.734 16:01:14 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:16.734 * Looking for test storage... 
00:23:16.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.734 16:01:14 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.734 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.735 16:01:14 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.735 16:01:14 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.735 16:01:14 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.735 16:01:14 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 16:01:14 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 16:01:14 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 16:01:14 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:16.735 16:01:14 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.735 16:01:14 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.735 16:01:15 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.735 16:01:15 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.735 16:01:15 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.735 16:01:15 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.735 16:01:15 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.735 16:01:15 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.735 16:01:15 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:16.735 16:01:15 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:16.735 00:23:16.735 real 0m0.141s 00:23:16.735 user 0m0.051s 00:23:16.735 sys 0m0.101s 00:23:16.735 16:01:15 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:16.735 16:01:15 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:16.735 ************************************ 00:23:16.735 END TEST dma 00:23:16.735 ************************************ 00:23:16.735 16:01:15 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:16.735 16:01:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:16.735 16:01:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:16.735 16:01:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.735 ************************************ 00:23:16.735 START TEST nvmf_identify 00:23:16.735 ************************************ 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:16.735 * Looking for test storage... 
00:23:16.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.735 16:01:15 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.736 16:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.736 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.736 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.736 16:01:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.736 16:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:23.306 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:23.306 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:23.306 Found net devices under 0000:af:00.0: cvl_0_0 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:23.306 Found net devices under 0000:af:00.1: cvl_0_1 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.306 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:23:23.307 00:23:23.307 --- 10.0.0.2 ping statistics --- 00:23:23.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.307 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:23:23.307 00:23:23.307 --- 10.0.0.1 ping statistics --- 00:23:23.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.307 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3840621 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3840621 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3840621 ']' 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.307 16:01:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.307 [2024-05-15 16:01:21.484347] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:23.307 [2024-05-15 16:01:21.484392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.307 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.307 [2024-05-15 16:01:21.557511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:23.307 [2024-05-15 16:01:21.634421] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:23.307 [2024-05-15 16:01:21.634456] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.307 [2024-05-15 16:01:21.634466] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.307 [2024-05-15 16:01:21.634474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.307 [2024-05-15 16:01:21.634481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.307 [2024-05-15 16:01:21.634569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.307 [2024-05-15 16:01:21.634660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.307 [2024-05-15 16:01:21.634719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.307 [2024-05-15 16:01:21.634721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.875 [2024-05-15 16:01:22.315009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.875 Malloc0 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.875 [2024-05-15 16:01:22.413658] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:23.875 [2024-05-15 16:01:22.413906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.875 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.875 [ 00:23:23.875 { 00:23:23.875 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:23.875 "subtype": "Discovery", 00:23:23.875 "listen_addresses": [ 00:23:23.875 { 00:23:23.875 "trtype": "TCP", 00:23:23.875 "adrfam": "IPv4", 00:23:23.875 "traddr": "10.0.0.2", 00:23:23.875 "trsvcid": "4420" 00:23:23.875 } 00:23:23.875 ], 00:23:23.875 "allow_any_host": true, 00:23:23.875 "hosts": [] 00:23:24.135 }, 00:23:24.135 { 00:23:24.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.135 "subtype": "NVMe", 00:23:24.135 "listen_addresses": [ 00:23:24.135 { 00:23:24.135 "trtype": "TCP", 00:23:24.135 "adrfam": "IPv4", 00:23:24.135 "traddr": "10.0.0.2", 00:23:24.135 "trsvcid": "4420" 00:23:24.135 } 00:23:24.135 ], 00:23:24.135 "allow_any_host": true, 00:23:24.135 "hosts": [], 00:23:24.135 "serial_number": "SPDK00000000000001", 00:23:24.135 "model_number": "SPDK bdev Controller", 00:23:24.135 "max_namespaces": 32, 00:23:24.135 "min_cntlid": 1, 00:23:24.135 "max_cntlid": 65519, 00:23:24.135 "namespaces": [ 00:23:24.135 { 00:23:24.135 "nsid": 1, 00:23:24.135 "bdev_name": "Malloc0", 00:23:24.135 "name": "Malloc0", 00:23:24.135 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:24.135 "eui64": "ABCDEF0123456789", 00:23:24.135 "uuid": "95015d80-ccd0-4474-bccd-c8a287ea8e02" 00:23:24.135 } 00:23:24.136 ] 00:23:24.136 } 00:23:24.136 ] 00:23:24.136 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.136 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:24.136 [2024-05-15 16:01:22.468986] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:23:24.136 [2024-05-15 16:01:22.469025] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840789 ] 00:23:24.136 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.136 [2024-05-15 16:01:22.500099] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:24.136 [2024-05-15 16:01:22.500148] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:24.136 [2024-05-15 16:01:22.500154] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:24.136 [2024-05-15 16:01:22.500168] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:24.136 [2024-05-15 16:01:22.500178] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:24.136 [2024-05-15 16:01:22.500677] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:24.136 [2024-05-15 16:01:22.500706] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c0cca0 0 00:23:24.136 [2024-05-15 16:01:22.511200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:24.136 [2024-05-15 16:01:22.511226] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:24.136 [2024-05-15 16:01:22.511232] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:24.136 [2024-05-15 16:01:22.511237] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:24.136 [2024-05-15 16:01:22.511281] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.511288] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.511293] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.511308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:24.136 [2024-05-15 16:01:22.511326] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.519203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 16:01:22.519212] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.519217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519223] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.519237] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:24.136 [2024-05-15 16:01:22.519245] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:24.136 [2024-05-15 16:01:22.519252] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:24.136 [2024-05-15 16:01:22.519266] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519271] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519275] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.519283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.136 [2024-05-15 16:01:22.519297] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.519505] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 16:01:22.519516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.519521] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519526] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.519534] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:24.136 [2024-05-15 16:01:22.519544] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:24.136 [2024-05-15 16:01:22.519552] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519557] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519562] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.519571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.136 [2024-05-15 16:01:22.519589] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.519717] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 16:01:22.519724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.519729] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519734] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.519741] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:24.136 [2024-05-15 16:01:22.519751] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:24.136 [2024-05-15 16:01:22.519760] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519764] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519769] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.519776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.136 [2024-05-15 16:01:22.519789] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.519917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 
16:01:22.519924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.519928] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519933] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.519940] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:24.136 [2024-05-15 16:01:22.519951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519956] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.519961] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.519968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.136 [2024-05-15 16:01:22.519981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.520105] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 16:01:22.520113] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.520117] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520122] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.520129] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:24.136 [2024-05-15 16:01:22.520135] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:24.136 [2024-05-15 16:01:22.520145] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:24.136 [2024-05-15 16:01:22.520252] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:24.136 [2024-05-15 16:01:22.520258] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:24.136 [2024-05-15 16:01:22.520269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520274] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520281] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.520289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.136 [2024-05-15 16:01:22.520303] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.520432] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 16:01:22.520439] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.520444] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520449] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.520456] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:24.136 [2024-05-15 16:01:22.520467] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520476] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.520483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.136 [2024-05-15 16:01:22.520496] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.520625] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 16:01:22.520632] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.520637] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520641] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.520648] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:24.136 [2024-05-15 16:01:22.520654] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:24.136 [2024-05-15 16:01:22.520664] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:24.136 [2024-05-15 16:01:22.520675] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:24.136 [2024-05-15 16:01:22.520685] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520690] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.520698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.136 [2024-05-15 16:01:22.520711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.520873] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.136 [2024-05-15 16:01:22.520880] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.136 [2024-05-15 16:01:22.520885] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.520891] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c0cca0): datao=0, datal=4096, cccid=0 00:23:24.136 [2024-05-15 16:01:22.520898] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76980) on tqpair(0x1c0cca0): expected_datao=0, payload_size=4096 00:23:24.136 [2024-05-15 16:01:22.520904] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.521123] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.521129] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 16:01:22.561404] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.561409] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561414] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.561424] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:24.136 [2024-05-15 16:01:22.561431] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:24.136 [2024-05-15 16:01:22.561437] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:24.136 [2024-05-15 16:01:22.561443] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:24.136 [2024-05-15 16:01:22.561449] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:24.136 [2024-05-15 16:01:22.561455] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:24.136 [2024-05-15 16:01:22.561470] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:24.136 [2024-05-15 16:01:22.561481] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561486] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561491] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.561499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:24.136 [2024-05-15 16:01:22.561514] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.136 [2024-05-15 16:01:22.561644] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.136 [2024-05-15 16:01:22.561652] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.136 [2024-05-15 16:01:22.561656] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561661] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76980) on tqpair=0x1c0cca0 00:23:24.136 [2024-05-15 16:01:22.561674] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561679] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.561690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:24.136 [2024-05-15 16:01:22.561698] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.561714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.136 [2024-05-15 16:01:22.561721] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.136 [2024-05-15 16:01:22.561730] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c0cca0) 00:23:24.136 [2024-05-15 16:01:22.561736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.137 [2024-05-15 16:01:22.561743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.561750] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.561755] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.137 [2024-05-15 16:01:22.561761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.137 [2024-05-15 16:01:22.561767] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:24.137 [2024-05-15 16:01:22.561778] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:24.137 [2024-05-15 16:01:22.561785] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.561790] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c0cca0) 00:23:24.137 [2024-05-15 16:01:22.561797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-05-15 16:01:22.561812] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76980, cid 0, qid 0 00:23:24.137 [2024-05-15 16:01:22.561818] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76ae0, cid 1, qid 0 00:23:24.137 [2024-05-15 16:01:22.561823] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76c40, cid 2, qid 0 00:23:24.137 [2024-05-15 16:01:22.561829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.137 [2024-05-15 16:01:22.561834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f00, cid 4, qid 0 00:23:24.137 [2024-05-15 16:01:22.561996] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.137 [2024-05-15 16:01:22.562003] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.137 [2024-05-15 16:01:22.562008] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562013] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76f00) on tqpair=0x1c0cca0 
00:23:24.137 [2024-05-15 16:01:22.562023] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:24.137 [2024-05-15 16:01:22.562030] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:24.137 [2024-05-15 16:01:22.562043] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562048] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c0cca0) 00:23:24.137 [2024-05-15 16:01:22.562055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-05-15 16:01:22.562068] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f00, cid 4, qid 0 00:23:24.137 [2024-05-15 16:01:22.562216] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.137 [2024-05-15 16:01:22.562224] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.137 [2024-05-15 16:01:22.562229] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562234] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c0cca0): datao=0, datal=4096, cccid=4 00:23:24.137 [2024-05-15 16:01:22.562240] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76f00) on tqpair(0x1c0cca0): expected_datao=0, payload_size=4096 00:23:24.137 [2024-05-15 16:01:22.562246] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562253] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562258] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562498] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.137 [2024-05-15 16:01:22.562505] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.137 [2024-05-15 16:01:22.562509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562516] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76f00) on tqpair=0x1c0cca0 00:23:24.137 [2024-05-15 16:01:22.562532] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:24.137 [2024-05-15 16:01:22.562560] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562566] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c0cca0) 00:23:24.137 [2024-05-15 16:01:22.562573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-05-15 16:01:22.562580] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562585] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c0cca0) 00:23:24.137 [2024-05-15 16:01:22.562596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.137 [2024-05-15 16:01:22.562615] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f00, cid 4, qid 0 00:23:24.137 [2024-05-15 16:01:22.562621] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c77060, cid 5, qid 0 00:23:24.137 [2024-05-15 16:01:22.562782] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.137 [2024-05-15 16:01:22.562790] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.137 [2024-05-15 16:01:22.562794] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562799] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c0cca0): datao=0, datal=1024, cccid=4 00:23:24.137 [2024-05-15 16:01:22.562805] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76f00) on tqpair(0x1c0cca0): expected_datao=0, payload_size=1024 00:23:24.137 [2024-05-15 16:01:22.562810] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562817] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562822] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.137 [2024-05-15 16:01:22.562834] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.137 [2024-05-15 16:01:22.562838] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.562843] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c77060) on tqpair=0x1c0cca0 00:23:24.137 [2024-05-15 16:01:22.607201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.137 [2024-05-15 16:01:22.607213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.137 [2024-05-15 16:01:22.607217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.607222] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76f00) on tqpair=0x1c0cca0 00:23:24.137 [2024-05-15 16:01:22.607238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.607243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c0cca0) 00:23:24.137 [2024-05-15 16:01:22.607252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-05-15 16:01:22.607271] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f00, cid 4, qid 0 00:23:24.137 [2024-05-15 16:01:22.607494] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.137 [2024-05-15 16:01:22.607503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.137 [2024-05-15 16:01:22.607507] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.607512] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c0cca0): datao=0, datal=3072, cccid=4 00:23:24.137 [2024-05-15 16:01:22.607522] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76f00) on tqpair(0x1c0cca0): expected_datao=0, payload_size=3072 00:23:24.137 [2024-05-15 16:01:22.607528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.607746] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:23:24.137 [2024-05-15 16:01:22.607751] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.648396] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.137 [2024-05-15 16:01:22.648409] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.137 [2024-05-15 16:01:22.648414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.648419] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76f00) on tqpair=0x1c0cca0 00:23:24.137 [2024-05-15 16:01:22.648432] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.648437] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c0cca0) 00:23:24.137 [2024-05-15 16:01:22.648445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-05-15 16:01:22.648464] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76f00, cid 4, qid 0 00:23:24.137 [2024-05-15 16:01:22.648819] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.137 [2024-05-15 16:01:22.648826] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.137 [2024-05-15 16:01:22.648831] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.648835] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c0cca0): datao=0, datal=8, cccid=4 00:23:24.137 [2024-05-15 16:01:22.648841] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76f00) on tqpair(0x1c0cca0): expected_datao=0, payload_size=8 00:23:24.137 [2024-05-15 16:01:22.648847] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.648854] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.648859] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.689370] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.137 [2024-05-15 16:01:22.689384] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.137 [2024-05-15 16:01:22.689388] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.137 [2024-05-15 16:01:22.689394] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76f00) on tqpair=0x1c0cca0 00:23:24.137 ===================================================== 00:23:24.137 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:24.137 ===================================================== 00:23:24.137 Controller Capabilities/Features 00:23:24.137 ================================ 00:23:24.137 Vendor ID: 0000 00:23:24.137 Subsystem Vendor ID: 0000 00:23:24.137 Serial Number: .................... 00:23:24.137 Model Number: ........................................ 
00:23:24.137 Firmware Version: 24.05 00:23:24.137 Recommended Arb Burst: 0 00:23:24.137 IEEE OUI Identifier: 00 00 00 00:23:24.137 Multi-path I/O 00:23:24.137 May have multiple subsystem ports: No 00:23:24.137 May have multiple controllers: No 00:23:24.137 Associated with SR-IOV VF: No 00:23:24.137 Max Data Transfer Size: 131072 00:23:24.137 Max Number of Namespaces: 0 00:23:24.137 Max Number of I/O Queues: 1024 00:23:24.137 NVMe Specification Version (VS): 1.3 00:23:24.137 NVMe Specification Version (Identify): 1.3 00:23:24.137 Maximum Queue Entries: 128 00:23:24.137 Contiguous Queues Required: Yes 00:23:24.137 Arbitration Mechanisms Supported 00:23:24.137 Weighted Round Robin: Not Supported 00:23:24.137 Vendor Specific: Not Supported 00:23:24.137 Reset Timeout: 15000 ms 00:23:24.137 Doorbell Stride: 4 bytes 00:23:24.137 NVM Subsystem Reset: Not Supported 00:23:24.137 Command Sets Supported 00:23:24.137 NVM Command Set: Supported 00:23:24.137 Boot Partition: Not Supported 00:23:24.137 Memory Page Size Minimum: 4096 bytes 00:23:24.137 Memory Page Size Maximum: 4096 bytes 00:23:24.137 Persistent Memory Region: Not Supported 00:23:24.137 Optional Asynchronous Events Supported 00:23:24.137 Namespace Attribute Notices: Not Supported 00:23:24.137 Firmware Activation Notices: Not Supported 00:23:24.137 ANA Change Notices: Not Supported 00:23:24.137 PLE Aggregate Log Change Notices: Not Supported 00:23:24.137 LBA Status Info Alert Notices: Not Supported 00:23:24.137 EGE Aggregate Log Change Notices: Not Supported 00:23:24.137 Normal NVM Subsystem Shutdown event: Not Supported 00:23:24.137 Zone Descriptor Change Notices: Not Supported 00:23:24.137 Discovery Log Change Notices: Supported 00:23:24.137 Controller Attributes 00:23:24.137 128-bit Host Identifier: Not Supported 00:23:24.137 Non-Operational Permissive Mode: Not Supported 00:23:24.137 NVM Sets: Not Supported 00:23:24.137 Read Recovery Levels: Not Supported 00:23:24.137 Endurance Groups: Not Supported 00:23:24.137 Predictable Latency Mode: Not Supported 00:23:24.137 Traffic Based Keep ALive: Not Supported 00:23:24.137 Namespace Granularity: Not Supported 00:23:24.137 SQ Associations: Not Supported 00:23:24.137 UUID List: Not Supported 00:23:24.137 Multi-Domain Subsystem: Not Supported 00:23:24.137 Fixed Capacity Management: Not Supported 00:23:24.137 Variable Capacity Management: Not Supported 00:23:24.137 Delete Endurance Group: Not Supported 00:23:24.137 Delete NVM Set: Not Supported 00:23:24.137 Extended LBA Formats Supported: Not Supported 00:23:24.137 Flexible Data Placement Supported: Not Supported 00:23:24.137 00:23:24.137 Controller Memory Buffer Support 00:23:24.137 ================================ 00:23:24.137 Supported: No 00:23:24.137 00:23:24.137 Persistent Memory Region Support 00:23:24.137 ================================ 00:23:24.137 Supported: No 00:23:24.137 00:23:24.137 Admin Command Set Attributes 00:23:24.137 ============================ 00:23:24.137 Security Send/Receive: Not Supported 00:23:24.137 Format NVM: Not Supported 00:23:24.137 Firmware Activate/Download: Not Supported 00:23:24.137 Namespace Management: Not Supported 00:23:24.137 Device Self-Test: Not Supported 00:23:24.137 Directives: Not Supported 00:23:24.137 NVMe-MI: Not Supported 00:23:24.137 Virtualization Management: Not Supported 00:23:24.137 Doorbell Buffer Config: Not Supported 00:23:24.137 Get LBA Status Capability: Not Supported 00:23:24.137 Command & Feature Lockdown Capability: Not Supported 00:23:24.137 Abort Command Limit: 1 00:23:24.137 Async 
Event Request Limit: 4 00:23:24.137 Number of Firmware Slots: N/A 00:23:24.137 Firmware Slot 1 Read-Only: N/A 00:23:24.137 Firmware Activation Without Reset: N/A 00:23:24.137 Multiple Update Detection Support: N/A 00:23:24.137 Firmware Update Granularity: No Information Provided 00:23:24.137 Per-Namespace SMART Log: No 00:23:24.137 Asymmetric Namespace Access Log Page: Not Supported 00:23:24.138 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:24.138 Command Effects Log Page: Not Supported 00:23:24.138 Get Log Page Extended Data: Supported 00:23:24.138 Telemetry Log Pages: Not Supported 00:23:24.138 Persistent Event Log Pages: Not Supported 00:23:24.138 Supported Log Pages Log Page: May Support 00:23:24.138 Commands Supported & Effects Log Page: Not Supported 00:23:24.138 Feature Identifiers & Effects Log Page:May Support 00:23:24.138 NVMe-MI Commands & Effects Log Page: May Support 00:23:24.138 Data Area 4 for Telemetry Log: Not Supported 00:23:24.138 Error Log Page Entries Supported: 128 00:23:24.138 Keep Alive: Not Supported 00:23:24.138 00:23:24.138 NVM Command Set Attributes 00:23:24.138 ========================== 00:23:24.138 Submission Queue Entry Size 00:23:24.138 Max: 1 00:23:24.138 Min: 1 00:23:24.138 Completion Queue Entry Size 00:23:24.138 Max: 1 00:23:24.138 Min: 1 00:23:24.138 Number of Namespaces: 0 00:23:24.138 Compare Command: Not Supported 00:23:24.138 Write Uncorrectable Command: Not Supported 00:23:24.138 Dataset Management Command: Not Supported 00:23:24.138 Write Zeroes Command: Not Supported 00:23:24.138 Set Features Save Field: Not Supported 00:23:24.138 Reservations: Not Supported 00:23:24.138 Timestamp: Not Supported 00:23:24.138 Copy: Not Supported 00:23:24.138 Volatile Write Cache: Not Present 00:23:24.138 Atomic Write Unit (Normal): 1 00:23:24.138 Atomic Write Unit (PFail): 1 00:23:24.138 Atomic Compare & Write Unit: 1 00:23:24.138 Fused Compare & Write: Supported 00:23:24.138 Scatter-Gather List 00:23:24.138 SGL Command Set: Supported 00:23:24.138 SGL Keyed: Supported 00:23:24.138 SGL Bit Bucket Descriptor: Not Supported 00:23:24.138 SGL Metadata Pointer: Not Supported 00:23:24.138 Oversized SGL: Not Supported 00:23:24.138 SGL Metadata Address: Not Supported 00:23:24.138 SGL Offset: Supported 00:23:24.138 Transport SGL Data Block: Not Supported 00:23:24.138 Replay Protected Memory Block: Not Supported 00:23:24.138 00:23:24.138 Firmware Slot Information 00:23:24.138 ========================= 00:23:24.138 Active slot: 0 00:23:24.138 00:23:24.138 00:23:24.138 Error Log 00:23:24.138 ========= 00:23:24.138 00:23:24.138 Active Namespaces 00:23:24.138 ================= 00:23:24.138 Discovery Log Page 00:23:24.138 ================== 00:23:24.138 Generation Counter: 2 00:23:24.138 Number of Records: 2 00:23:24.138 Record Format: 0 00:23:24.138 00:23:24.138 Discovery Log Entry 0 00:23:24.138 ---------------------- 00:23:24.138 Transport Type: 3 (TCP) 00:23:24.138 Address Family: 1 (IPv4) 00:23:24.138 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:24.138 Entry Flags: 00:23:24.138 Duplicate Returned Information: 1 00:23:24.138 Explicit Persistent Connection Support for Discovery: 1 00:23:24.138 Transport Requirements: 00:23:24.138 Secure Channel: Not Required 00:23:24.138 Port ID: 0 (0x0000) 00:23:24.138 Controller ID: 65535 (0xffff) 00:23:24.138 Admin Max SQ Size: 128 00:23:24.138 Transport Service Identifier: 4420 00:23:24.138 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:24.138 Transport Address: 10.0.0.2 00:23:24.138 
Discovery Log Entry 1 00:23:24.138 ---------------------- 00:23:24.138 Transport Type: 3 (TCP) 00:23:24.138 Address Family: 1 (IPv4) 00:23:24.138 Subsystem Type: 2 (NVM Subsystem) 00:23:24.138 Entry Flags: 00:23:24.138 Duplicate Returned Information: 0 00:23:24.138 Explicit Persistent Connection Support for Discovery: 0 00:23:24.138 Transport Requirements: 00:23:24.138 Secure Channel: Not Required 00:23:24.138 Port ID: 0 (0x0000) 00:23:24.138 Controller ID: 65535 (0xffff) 00:23:24.138 Admin Max SQ Size: 128 00:23:24.138 Transport Service Identifier: 4420 00:23:24.138 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:24.138 Transport Address: 10.0.0.2 [2024-05-15 16:01:22.689485] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:24.138 [2024-05-15 16:01:22.689502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.138 [2024-05-15 16:01:22.689510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.138 [2024-05-15 16:01:22.689517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.138 [2024-05-15 16:01:22.689524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.138 [2024-05-15 16:01:22.689534] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.689539] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.689543] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.138 [2024-05-15 16:01:22.689551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-05-15 16:01:22.689567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.138 [2024-05-15 16:01:22.689873] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.138 [2024-05-15 16:01:22.689882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.138 [2024-05-15 16:01:22.689886] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.689891] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76da0) on tqpair=0x1c0cca0 00:23:24.138 [2024-05-15 16:01:22.689900] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.689904] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.689909] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.138 [2024-05-15 16:01:22.689916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-05-15 16:01:22.689932] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.138 [2024-05-15 16:01:22.690083] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.138 [2024-05-15 16:01:22.690091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.138 [2024-05-15 16:01:22.690095] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76da0) on tqpair=0x1c0cca0 00:23:24.138 [2024-05-15 16:01:22.690108] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:24.138 [2024-05-15 16:01:22.690114] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:24.138 [2024-05-15 16:01:22.690125] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690130] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.138 [2024-05-15 16:01:22.690142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-05-15 16:01:22.690155] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.138 [2024-05-15 16:01:22.690454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.138 [2024-05-15 16:01:22.690460] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.138 [2024-05-15 16:01:22.690465] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690470] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76da0) on tqpair=0x1c0cca0 00:23:24.138 [2024-05-15 16:01:22.690482] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690487] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690492] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.138 [2024-05-15 16:01:22.690499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-05-15 16:01:22.690511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.138 [2024-05-15 16:01:22.690641] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.138 [2024-05-15 16:01:22.690648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.138 [2024-05-15 16:01:22.690652] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690657] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76da0) on tqpair=0x1c0cca0 00:23:24.138 [2024-05-15 16:01:22.690669] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690678] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.138 [2024-05-15 16:01:22.690685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-05-15 16:01:22.690701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.138 [2024-05-15 16:01:22.690830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.138 [2024-05-15 
16:01:22.690837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.138 [2024-05-15 16:01:22.690842] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690847] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76da0) on tqpair=0x1c0cca0 00:23:24.138 [2024-05-15 16:01:22.690858] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.690868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.138 [2024-05-15 16:01:22.690875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-05-15 16:01:22.690887] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.138 [2024-05-15 16:01:22.691018] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.138 [2024-05-15 16:01:22.691025] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.138 [2024-05-15 16:01:22.691030] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.691034] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76da0) on tqpair=0x1c0cca0 00:23:24.138 [2024-05-15 16:01:22.691046] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.691051] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.691056] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.138 [2024-05-15 16:01:22.691063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-05-15 16:01:22.691075] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.138 [2024-05-15 16:01:22.694366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.138 [2024-05-15 16:01:22.694379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.138 [2024-05-15 16:01:22.694384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.694389] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76da0) on tqpair=0x1c0cca0 00:23:24.138 [2024-05-15 16:01:22.694403] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.694408] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.138 [2024-05-15 16:01:22.694413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c0cca0) 00:23:24.138 [2024-05-15 16:01:22.694420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-05-15 16:01:22.694435] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76da0, cid 3, qid 0 00:23:24.138 [2024-05-15 16:01:22.694584] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.138 [2024-05-15 16:01:22.694591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.138 [2024-05-15 16:01:22.694596] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:24.138 [2024-05-15 16:01:22.694601] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c76da0) on tqpair=0x1c0cca0 00:23:24.138 [2024-05-15 16:01:22.694611] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:23:24.400 00:23:24.400 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:24.400 [2024-05-15 16:01:22.735094] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:24.401 [2024-05-15 16:01:22.735141] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840903 ] 00:23:24.401 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.401 [2024-05-15 16:01:22.766247] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:24.401 [2024-05-15 16:01:22.766285] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:24.401 [2024-05-15 16:01:22.766291] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:24.401 [2024-05-15 16:01:22.766303] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:24.401 [2024-05-15 16:01:22.766311] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:24.401 [2024-05-15 16:01:22.766846] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:24.401 [2024-05-15 16:01:22.766867] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x62aca0 0 00:23:24.401 [2024-05-15 16:01:22.773197] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:24.401 [2024-05-15 16:01:22.773218] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:24.401 [2024-05-15 16:01:22.773223] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:24.401 [2024-05-15 16:01:22.773228] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:24.401 [2024-05-15 16:01:22.773264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.773270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.773274] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.401 [2024-05-15 16:01:22.773287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:24.401 [2024-05-15 16:01:22.773304] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.401 [2024-05-15 16:01:22.781205] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.401 [2024-05-15 16:01:22.781216] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.401 [2024-05-15 16:01:22.781221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781226] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on 
tqpair=0x62aca0 00:23:24.401 [2024-05-15 16:01:22.781238] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:24.401 [2024-05-15 16:01:22.781245] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:24.401 [2024-05-15 16:01:22.781252] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:24.401 [2024-05-15 16:01:22.781265] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781271] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781276] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.401 [2024-05-15 16:01:22.781285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.401 [2024-05-15 16:01:22.781300] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.401 [2024-05-15 16:01:22.781565] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.401 [2024-05-15 16:01:22.781575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.401 [2024-05-15 16:01:22.781579] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781587] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on tqpair=0x62aca0 00:23:24.401 [2024-05-15 16:01:22.781593] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:24.401 [2024-05-15 16:01:22.781604] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:24.401 [2024-05-15 16:01:22.781613] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781622] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.401 [2024-05-15 16:01:22.781631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.401 [2024-05-15 16:01:22.781644] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.401 [2024-05-15 16:01:22.781771] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.401 [2024-05-15 16:01:22.781779] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.401 [2024-05-15 16:01:22.781784] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781788] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on tqpair=0x62aca0 00:23:24.401 [2024-05-15 16:01:22.781795] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:24.401 [2024-05-15 16:01:22.781805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:24.401 [2024-05-15 16:01:22.781813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781818] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.781823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.401 [2024-05-15 16:01:22.781831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.401 [2024-05-15 16:01:22.781843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.401 [2024-05-15 16:01:22.782197] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.401 [2024-05-15 16:01:22.782204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.401 [2024-05-15 16:01:22.782209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.782213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on tqpair=0x62aca0 00:23:24.401 [2024-05-15 16:01:22.782219] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:24.401 [2024-05-15 16:01:22.782230] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.782235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.782239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.401 [2024-05-15 16:01:22.782246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.401 [2024-05-15 16:01:22.782258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.401 [2024-05-15 16:01:22.782598] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.401 [2024-05-15 16:01:22.782605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.401 [2024-05-15 16:01:22.782609] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.782614] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on tqpair=0x62aca0 00:23:24.401 [2024-05-15 16:01:22.782619] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:24.401 [2024-05-15 16:01:22.782625] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:24.401 [2024-05-15 16:01:22.782637] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:24.401 [2024-05-15 16:01:22.782743] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:24.401 [2024-05-15 16:01:22.782748] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:24.401 [2024-05-15 16:01:22.782757] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.782762] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.782766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.401 [2024-05-15 16:01:22.782773] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.401 [2024-05-15 16:01:22.782785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.401 [2024-05-15 16:01:22.783070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.401 [2024-05-15 16:01:22.783079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.401 [2024-05-15 16:01:22.783084] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.783088] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on tqpair=0x62aca0 00:23:24.401 [2024-05-15 16:01:22.783094] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:24.401 [2024-05-15 16:01:22.783106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.783111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.783116] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.401 [2024-05-15 16:01:22.783123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.401 [2024-05-15 16:01:22.783136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.401 [2024-05-15 16:01:22.783271] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.401 [2024-05-15 16:01:22.783281] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.401 [2024-05-15 16:01:22.783285] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.783290] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on tqpair=0x62aca0 00:23:24.401 [2024-05-15 16:01:22.783295] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:24.401 [2024-05-15 16:01:22.783302] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:24.401 [2024-05-15 16:01:22.783312] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:24.401 [2024-05-15 16:01:22.783322] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:24.401 [2024-05-15 16:01:22.783332] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.783337] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.401 [2024-05-15 16:01:22.783345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.401 [2024-05-15 16:01:22.783359] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.401 [2024-05-15 16:01:22.783726] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.401 [2024-05-15 16:01:22.783735] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.401 [2024-05-15 
16:01:22.783740] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.401 [2024-05-15 16:01:22.783744] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x62aca0): datao=0, datal=4096, cccid=0 00:23:24.402 [2024-05-15 16:01:22.783750] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x694980) on tqpair(0x62aca0): expected_datao=0, payload_size=4096 00:23:24.402 [2024-05-15 16:01:22.783756] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.783764] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.783769] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.783993] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.402 [2024-05-15 16:01:22.784000] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.402 [2024-05-15 16:01:22.784004] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784009] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on tqpair=0x62aca0 00:23:24.402 [2024-05-15 16:01:22.784017] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:24.402 [2024-05-15 16:01:22.784023] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:24.402 [2024-05-15 16:01:22.784029] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:24.402 [2024-05-15 16:01:22.784034] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:24.402 [2024-05-15 16:01:22.784039] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:24.402 [2024-05-15 16:01:22.784045] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784058] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784067] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784072] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784077] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.402 [2024-05-15 16:01:22.784084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:24.402 [2024-05-15 16:01:22.784097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.402 [2024-05-15 16:01:22.784236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.402 [2024-05-15 16:01:22.784245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.402 [2024-05-15 16:01:22.784249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784254] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694980) on tqpair=0x62aca0 00:23:24.402 [2024-05-15 16:01:22.784265] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.402 [2024-05-15 
16:01:22.784269] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784274] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x62aca0) 00:23:24.402 [2024-05-15 16:01:22.784281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.402 [2024-05-15 16:01:22.784288] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784293] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784298] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x62aca0) 00:23:24.402 [2024-05-15 16:01:22.784304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.402 [2024-05-15 16:01:22.784313] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784318] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784322] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x62aca0) 00:23:24.402 [2024-05-15 16:01:22.784329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.402 [2024-05-15 16:01:22.784336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784340] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784345] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.402 [2024-05-15 16:01:22.784351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.402 [2024-05-15 16:01:22.784357] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784375] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784380] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x62aca0) 00:23:24.402 [2024-05-15 16:01:22.784387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.402 [2024-05-15 16:01:22.784401] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694980, cid 0, qid 0 00:23:24.402 [2024-05-15 16:01:22.784407] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694ae0, cid 1, qid 0 00:23:24.402 [2024-05-15 16:01:22.784413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694c40, cid 2, qid 0 00:23:24.402 [2024-05-15 16:01:22.784418] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.402 [2024-05-15 16:01:22.784424] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694f00, cid 4, qid 0 00:23:24.402 [2024-05-15 16:01:22.784607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.402 
[2024-05-15 16:01:22.784614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.402 [2024-05-15 16:01:22.784619] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784623] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694f00) on tqpair=0x62aca0 00:23:24.402 [2024-05-15 16:01:22.784632] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:24.402 [2024-05-15 16:01:22.784638] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784649] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784656] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784663] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784668] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784673] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x62aca0) 00:23:24.402 [2024-05-15 16:01:22.784680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:24.402 [2024-05-15 16:01:22.784693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694f00, cid 4, qid 0 00:23:24.402 [2024-05-15 16:01:22.784828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.402 [2024-05-15 16:01:22.784838] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.402 [2024-05-15 16:01:22.784842] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784847] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694f00) on tqpair=0x62aca0 00:23:24.402 [2024-05-15 16:01:22.784892] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784904] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:24.402 [2024-05-15 16:01:22.784912] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.784917] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x62aca0) 00:23:24.402 [2024-05-15 16:01:22.784925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.402 [2024-05-15 16:01:22.784938] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694f00, cid 4, qid 0 00:23:24.402 [2024-05-15 16:01:22.785081] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.402 [2024-05-15 16:01:22.785089] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.402 [2024-05-15 16:01:22.785093] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.785098] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x62aca0): datao=0, datal=4096, cccid=4 00:23:24.402 [2024-05-15 16:01:22.785104] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x694f00) on tqpair(0x62aca0): expected_datao=0, payload_size=4096 00:23:24.402 [2024-05-15 16:01:22.785109] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.785117] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.785122] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.402 [2024-05-15 16:01:22.789199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.402 [2024-05-15 16:01:22.789206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.402 [2024-05-15 16:01:22.789211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.789216] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694f00) on tqpair=0x62aca0 00:23:24.403 [2024-05-15 16:01:22.789230] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:24.403 [2024-05-15 16:01:22.789244] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.789255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.789263] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.789268] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x62aca0) 00:23:24.403 [2024-05-15 16:01:22.789276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.403 [2024-05-15 16:01:22.789290] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694f00, cid 4, qid 0 00:23:24.403 [2024-05-15 16:01:22.789518] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.403 [2024-05-15 16:01:22.789525] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.403 [2024-05-15 16:01:22.789530] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.789535] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x62aca0): datao=0, datal=4096, cccid=4 00:23:24.403 [2024-05-15 16:01:22.789541] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x694f00) on tqpair(0x62aca0): expected_datao=0, payload_size=4096 00:23:24.403 [2024-05-15 16:01:22.789546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.789739] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.789745] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.789875] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.403 [2024-05-15 16:01:22.789882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.403 [2024-05-15 16:01:22.789887] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.789891] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694f00) on tqpair=0x62aca0 00:23:24.403 [2024-05-15 
16:01:22.789902] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.789914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.789923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.789927] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x62aca0) 00:23:24.403 [2024-05-15 16:01:22.789935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.403 [2024-05-15 16:01:22.789949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694f00, cid 4, qid 0 00:23:24.403 [2024-05-15 16:01:22.790082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.403 [2024-05-15 16:01:22.790089] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.403 [2024-05-15 16:01:22.790093] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790098] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x62aca0): datao=0, datal=4096, cccid=4 00:23:24.403 [2024-05-15 16:01:22.790103] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x694f00) on tqpair(0x62aca0): expected_datao=0, payload_size=4096 00:23:24.403 [2024-05-15 16:01:22.790109] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790307] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790312] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.403 [2024-05-15 16:01:22.790450] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.403 [2024-05-15 16:01:22.790454] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790459] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694f00) on tqpair=0x62aca0 00:23:24.403 [2024-05-15 16:01:22.790471] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.790482] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.790490] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.790498] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.790504] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.790511] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:24.403 [2024-05-15 16:01:22.790517] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:24.403 [2024-05-15 16:01:22.790523] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:24.403 [2024-05-15 16:01:22.790542] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790547] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x62aca0) 00:23:24.403 [2024-05-15 16:01:22.790555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.403 [2024-05-15 16:01:22.790562] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790567] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790571] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x62aca0) 00:23:24.403 [2024-05-15 16:01:22.790578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.403 [2024-05-15 16:01:22.790595] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694f00, cid 4, qid 0 00:23:24.403 [2024-05-15 16:01:22.790600] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x695060, cid 5, qid 0 00:23:24.403 [2024-05-15 16:01:22.790745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.403 [2024-05-15 16:01:22.790752] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.403 [2024-05-15 16:01:22.790757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694f00) on tqpair=0x62aca0 00:23:24.403 [2024-05-15 16:01:22.790769] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.403 [2024-05-15 16:01:22.790775] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.403 [2024-05-15 16:01:22.790779] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790784] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x695060) on tqpair=0x62aca0 00:23:24.403 [2024-05-15 16:01:22.790795] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.790800] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x62aca0) 00:23:24.403 [2024-05-15 16:01:22.790807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.403 [2024-05-15 16:01:22.790820] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x695060, cid 5, qid 0 00:23:24.403 [2024-05-15 16:01:22.790984] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.403 [2024-05-15 16:01:22.790991] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.403 [2024-05-15 16:01:22.790995] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.791000] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x695060) on tqpair=0x62aca0 00:23:24.403 [2024-05-15 16:01:22.791011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.403 [2024-05-15 16:01:22.791016] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x62aca0) 00:23:24.403 [2024-05-15 16:01:22.791023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.404 [2024-05-15 16:01:22.791035] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x695060, cid 5, qid 0 00:23:24.404 [2024-05-15 16:01:22.791365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.404 [2024-05-15 16:01:22.791372] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.404 [2024-05-15 16:01:22.791377] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.791381] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x695060) on tqpair=0x62aca0 00:23:24.404 [2024-05-15 16:01:22.791392] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.791397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x62aca0) 00:23:24.404 [2024-05-15 16:01:22.791403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.404 [2024-05-15 16:01:22.791418] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x695060, cid 5, qid 0 00:23:24.404 [2024-05-15 16:01:22.791711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.404 [2024-05-15 16:01:22.791717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.404 [2024-05-15 16:01:22.791722] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.791726] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x695060) on tqpair=0x62aca0 00:23:24.404 [2024-05-15 16:01:22.791740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.791745] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x62aca0) 00:23:24.404 [2024-05-15 16:01:22.791752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.404 [2024-05-15 16:01:22.791759] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.791764] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x62aca0) 00:23:24.404 [2024-05-15 16:01:22.791771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.404 [2024-05-15 16:01:22.791778] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.791783] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x62aca0) 00:23:24.404 [2024-05-15 16:01:22.791790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.404 [2024-05-15 16:01:22.791800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.791805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x62aca0) 00:23:24.404 
[2024-05-15 16:01:22.791812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.404 [2024-05-15 16:01:22.791824] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x695060, cid 5, qid 0 00:23:24.404 [2024-05-15 16:01:22.791830] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694f00, cid 4, qid 0 00:23:24.404 [2024-05-15 16:01:22.791836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6951c0, cid 6, qid 0 00:23:24.404 [2024-05-15 16:01:22.791841] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x695320, cid 7, qid 0 00:23:24.404 [2024-05-15 16:01:22.792186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.404 [2024-05-15 16:01:22.792196] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.404 [2024-05-15 16:01:22.792201] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792205] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x62aca0): datao=0, datal=8192, cccid=5 00:23:24.404 [2024-05-15 16:01:22.792211] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x695060) on tqpair(0x62aca0): expected_datao=0, payload_size=8192 00:23:24.404 [2024-05-15 16:01:22.792217] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792408] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792413] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792419] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.404 [2024-05-15 16:01:22.792425] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.404 [2024-05-15 16:01:22.792430] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792434] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x62aca0): datao=0, datal=512, cccid=4 00:23:24.404 [2024-05-15 16:01:22.792440] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x694f00) on tqpair(0x62aca0): expected_datao=0, payload_size=512 00:23:24.404 [2024-05-15 16:01:22.792447] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792454] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792459] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792465] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.404 [2024-05-15 16:01:22.792471] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.404 [2024-05-15 16:01:22.792475] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792480] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x62aca0): datao=0, datal=512, cccid=6 00:23:24.404 [2024-05-15 16:01:22.792486] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6951c0) on tqpair(0x62aca0): expected_datao=0, payload_size=512 00:23:24.404 [2024-05-15 16:01:22.792491] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792498] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792502] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792508] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:24.404 [2024-05-15 16:01:22.792514] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:24.404 [2024-05-15 16:01:22.792519] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792523] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x62aca0): datao=0, datal=4096, cccid=7 00:23:24.404 [2024-05-15 16:01:22.792529] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x695320) on tqpair(0x62aca0): expected_datao=0, payload_size=4096 00:23:24.404 [2024-05-15 16:01:22.792534] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792541] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792546] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.404 [2024-05-15 16:01:22.792749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.404 [2024-05-15 16:01:22.792754] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792758] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x695060) on tqpair=0x62aca0 00:23:24.404 [2024-05-15 16:01:22.792771] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.404 [2024-05-15 16:01:22.792777] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.404 [2024-05-15 16:01:22.792781] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792786] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694f00) on tqpair=0x62aca0 00:23:24.404 [2024-05-15 16:01:22.792795] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.404 [2024-05-15 16:01:22.792801] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.404 [2024-05-15 16:01:22.792806] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792811] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6951c0) on tqpair=0x62aca0 00:23:24.404 [2024-05-15 16:01:22.792820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.404 [2024-05-15 16:01:22.792826] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.404 [2024-05-15 16:01:22.792830] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.404 [2024-05-15 16:01:22.792835] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x695320) on tqpair=0x62aca0 00:23:24.404 ===================================================== 00:23:24.404 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.404 ===================================================== 00:23:24.405 Controller Capabilities/Features 00:23:24.405 ================================ 00:23:24.405 Vendor ID: 8086 00:23:24.405 Subsystem Vendor ID: 8086 00:23:24.405 Serial Number: SPDK00000000000001 00:23:24.405 Model Number: SPDK bdev Controller 00:23:24.405 Firmware Version: 24.05 00:23:24.405 Recommended Arb Burst: 6 00:23:24.405 IEEE OUI Identifier: e4 d2 5c 00:23:24.405 Multi-path I/O 00:23:24.405 May have multiple subsystem ports: Yes 
00:23:24.405 May have multiple controllers: Yes 00:23:24.405 Associated with SR-IOV VF: No 00:23:24.405 Max Data Transfer Size: 131072 00:23:24.405 Max Number of Namespaces: 32 00:23:24.405 Max Number of I/O Queues: 127 00:23:24.405 NVMe Specification Version (VS): 1.3 00:23:24.405 NVMe Specification Version (Identify): 1.3 00:23:24.405 Maximum Queue Entries: 128 00:23:24.405 Contiguous Queues Required: Yes 00:23:24.405 Arbitration Mechanisms Supported 00:23:24.405 Weighted Round Robin: Not Supported 00:23:24.405 Vendor Specific: Not Supported 00:23:24.405 Reset Timeout: 15000 ms 00:23:24.405 Doorbell Stride: 4 bytes 00:23:24.405 NVM Subsystem Reset: Not Supported 00:23:24.405 Command Sets Supported 00:23:24.405 NVM Command Set: Supported 00:23:24.405 Boot Partition: Not Supported 00:23:24.405 Memory Page Size Minimum: 4096 bytes 00:23:24.405 Memory Page Size Maximum: 4096 bytes 00:23:24.405 Persistent Memory Region: Not Supported 00:23:24.405 Optional Asynchronous Events Supported 00:23:24.405 Namespace Attribute Notices: Supported 00:23:24.405 Firmware Activation Notices: Not Supported 00:23:24.405 ANA Change Notices: Not Supported 00:23:24.405 PLE Aggregate Log Change Notices: Not Supported 00:23:24.405 LBA Status Info Alert Notices: Not Supported 00:23:24.405 EGE Aggregate Log Change Notices: Not Supported 00:23:24.405 Normal NVM Subsystem Shutdown event: Not Supported 00:23:24.405 Zone Descriptor Change Notices: Not Supported 00:23:24.405 Discovery Log Change Notices: Not Supported 00:23:24.405 Controller Attributes 00:23:24.405 128-bit Host Identifier: Supported 00:23:24.405 Non-Operational Permissive Mode: Not Supported 00:23:24.405 NVM Sets: Not Supported 00:23:24.405 Read Recovery Levels: Not Supported 00:23:24.405 Endurance Groups: Not Supported 00:23:24.405 Predictable Latency Mode: Not Supported 00:23:24.405 Traffic Based Keep ALive: Not Supported 00:23:24.405 Namespace Granularity: Not Supported 00:23:24.405 SQ Associations: Not Supported 00:23:24.405 UUID List: Not Supported 00:23:24.405 Multi-Domain Subsystem: Not Supported 00:23:24.405 Fixed Capacity Management: Not Supported 00:23:24.405 Variable Capacity Management: Not Supported 00:23:24.405 Delete Endurance Group: Not Supported 00:23:24.405 Delete NVM Set: Not Supported 00:23:24.405 Extended LBA Formats Supported: Not Supported 00:23:24.405 Flexible Data Placement Supported: Not Supported 00:23:24.405 00:23:24.405 Controller Memory Buffer Support 00:23:24.405 ================================ 00:23:24.405 Supported: No 00:23:24.405 00:23:24.405 Persistent Memory Region Support 00:23:24.405 ================================ 00:23:24.405 Supported: No 00:23:24.405 00:23:24.405 Admin Command Set Attributes 00:23:24.405 ============================ 00:23:24.405 Security Send/Receive: Not Supported 00:23:24.405 Format NVM: Not Supported 00:23:24.405 Firmware Activate/Download: Not Supported 00:23:24.405 Namespace Management: Not Supported 00:23:24.405 Device Self-Test: Not Supported 00:23:24.405 Directives: Not Supported 00:23:24.405 NVMe-MI: Not Supported 00:23:24.405 Virtualization Management: Not Supported 00:23:24.405 Doorbell Buffer Config: Not Supported 00:23:24.405 Get LBA Status Capability: Not Supported 00:23:24.405 Command & Feature Lockdown Capability: Not Supported 00:23:24.405 Abort Command Limit: 4 00:23:24.405 Async Event Request Limit: 4 00:23:24.405 Number of Firmware Slots: N/A 00:23:24.405 Firmware Slot 1 Read-Only: N/A 00:23:24.405 Firmware Activation Without Reset: N/A 00:23:24.405 Multiple Update 
Detection Support: N/A 00:23:24.405 Firmware Update Granularity: No Information Provided 00:23:24.405 Per-Namespace SMART Log: No 00:23:24.405 Asymmetric Namespace Access Log Page: Not Supported 00:23:24.405 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:24.405 Command Effects Log Page: Supported 00:23:24.405 Get Log Page Extended Data: Supported 00:23:24.405 Telemetry Log Pages: Not Supported 00:23:24.405 Persistent Event Log Pages: Not Supported 00:23:24.405 Supported Log Pages Log Page: May Support 00:23:24.405 Commands Supported & Effects Log Page: Not Supported 00:23:24.405 Feature Identifiers & Effects Log Page:May Support 00:23:24.405 NVMe-MI Commands & Effects Log Page: May Support 00:23:24.405 Data Area 4 for Telemetry Log: Not Supported 00:23:24.405 Error Log Page Entries Supported: 128 00:23:24.405 Keep Alive: Supported 00:23:24.405 Keep Alive Granularity: 10000 ms 00:23:24.405 00:23:24.405 NVM Command Set Attributes 00:23:24.405 ========================== 00:23:24.405 Submission Queue Entry Size 00:23:24.405 Max: 64 00:23:24.405 Min: 64 00:23:24.405 Completion Queue Entry Size 00:23:24.405 Max: 16 00:23:24.405 Min: 16 00:23:24.405 Number of Namespaces: 32 00:23:24.405 Compare Command: Supported 00:23:24.405 Write Uncorrectable Command: Not Supported 00:23:24.405 Dataset Management Command: Supported 00:23:24.405 Write Zeroes Command: Supported 00:23:24.405 Set Features Save Field: Not Supported 00:23:24.405 Reservations: Supported 00:23:24.405 Timestamp: Not Supported 00:23:24.405 Copy: Supported 00:23:24.405 Volatile Write Cache: Present 00:23:24.405 Atomic Write Unit (Normal): 1 00:23:24.405 Atomic Write Unit (PFail): 1 00:23:24.405 Atomic Compare & Write Unit: 1 00:23:24.405 Fused Compare & Write: Supported 00:23:24.405 Scatter-Gather List 00:23:24.405 SGL Command Set: Supported 00:23:24.405 SGL Keyed: Supported 00:23:24.405 SGL Bit Bucket Descriptor: Not Supported 00:23:24.405 SGL Metadata Pointer: Not Supported 00:23:24.405 Oversized SGL: Not Supported 00:23:24.406 SGL Metadata Address: Not Supported 00:23:24.406 SGL Offset: Supported 00:23:24.406 Transport SGL Data Block: Not Supported 00:23:24.406 Replay Protected Memory Block: Not Supported 00:23:24.406 00:23:24.406 Firmware Slot Information 00:23:24.406 ========================= 00:23:24.406 Active slot: 1 00:23:24.406 Slot 1 Firmware Revision: 24.05 00:23:24.406 00:23:24.406 00:23:24.406 Commands Supported and Effects 00:23:24.406 ============================== 00:23:24.406 Admin Commands 00:23:24.406 -------------- 00:23:24.406 Get Log Page (02h): Supported 00:23:24.406 Identify (06h): Supported 00:23:24.406 Abort (08h): Supported 00:23:24.406 Set Features (09h): Supported 00:23:24.406 Get Features (0Ah): Supported 00:23:24.406 Asynchronous Event Request (0Ch): Supported 00:23:24.406 Keep Alive (18h): Supported 00:23:24.406 I/O Commands 00:23:24.406 ------------ 00:23:24.406 Flush (00h): Supported LBA-Change 00:23:24.406 Write (01h): Supported LBA-Change 00:23:24.406 Read (02h): Supported 00:23:24.406 Compare (05h): Supported 00:23:24.406 Write Zeroes (08h): Supported LBA-Change 00:23:24.406 Dataset Management (09h): Supported LBA-Change 00:23:24.406 Copy (19h): Supported LBA-Change 00:23:24.406 Unknown (79h): Supported LBA-Change 00:23:24.406 Unknown (7Ah): Supported 00:23:24.406 00:23:24.406 Error Log 00:23:24.406 ========= 00:23:24.406 00:23:24.406 Arbitration 00:23:24.406 =========== 00:23:24.406 Arbitration Burst: 1 00:23:24.406 00:23:24.406 Power Management 00:23:24.406 ================ 00:23:24.406 Number of 
Power States: 1 00:23:24.406 Current Power State: Power State #0 00:23:24.406 Power State #0: 00:23:24.406 Max Power: 0.00 W 00:23:24.406 Non-Operational State: Operational 00:23:24.406 Entry Latency: Not Reported 00:23:24.406 Exit Latency: Not Reported 00:23:24.406 Relative Read Throughput: 0 00:23:24.406 Relative Read Latency: 0 00:23:24.406 Relative Write Throughput: 0 00:23:24.406 Relative Write Latency: 0 00:23:24.406 Idle Power: Not Reported 00:23:24.406 Active Power: Not Reported 00:23:24.406 Non-Operational Permissive Mode: Not Supported 00:23:24.406 00:23:24.406 Health Information 00:23:24.406 ================== 00:23:24.406 Critical Warnings: 00:23:24.406 Available Spare Space: OK 00:23:24.406 Temperature: OK 00:23:24.406 Device Reliability: OK 00:23:24.406 Read Only: No 00:23:24.406 Volatile Memory Backup: OK 00:23:24.406 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:24.406 Temperature Threshold: [2024-05-15 16:01:22.792921] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.406 [2024-05-15 16:01:22.792927] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x62aca0) 00:23:24.406 [2024-05-15 16:01:22.792934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.406 [2024-05-15 16:01:22.792949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x695320, cid 7, qid 0 00:23:24.406 [2024-05-15 16:01:22.793185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.406 [2024-05-15 16:01:22.797198] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.406 [2024-05-15 16:01:22.797204] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.406 [2024-05-15 16:01:22.797209] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x695320) on tqpair=0x62aca0 00:23:24.406 [2024-05-15 16:01:22.797243] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:24.406 [2024-05-15 16:01:22.797256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.406 [2024-05-15 16:01:22.797264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.406 [2024-05-15 16:01:22.797271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.406 [2024-05-15 16:01:22.797278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.406 [2024-05-15 16:01:22.797287] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.406 [2024-05-15 16:01:22.797292] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.406 [2024-05-15 16:01:22.797297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.406 [2024-05-15 16:01:22.797305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.797320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 [2024-05-15 16:01:22.797543] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 
[2024-05-15 16:01:22.797550] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.407 [2024-05-15 16:01:22.797555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.797559] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.407 [2024-05-15 16:01:22.797567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.797572] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.797577] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.407 [2024-05-15 16:01:22.797584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.797600] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 [2024-05-15 16:01:22.797767] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 [2024-05-15 16:01:22.797774] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.407 [2024-05-15 16:01:22.797778] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.797783] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.407 [2024-05-15 16:01:22.797788] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:24.407 [2024-05-15 16:01:22.797794] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:24.407 [2024-05-15 16:01:22.797805] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.797810] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.797814] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.407 [2024-05-15 16:01:22.797822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.797838] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 [2024-05-15 16:01:22.798147] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 [2024-05-15 16:01:22.798154] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.407 [2024-05-15 16:01:22.798158] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798163] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.407 [2024-05-15 16:01:22.798174] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798179] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798183] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.407 [2024-05-15 16:01:22.798195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.798208] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 
[2024-05-15 16:01:22.798367] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 [2024-05-15 16:01:22.798374] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.407 [2024-05-15 16:01:22.798379] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798383] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.407 [2024-05-15 16:01:22.798394] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798399] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798404] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.407 [2024-05-15 16:01:22.798411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.798423] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 [2024-05-15 16:01:22.798554] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 [2024-05-15 16:01:22.798561] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.407 [2024-05-15 16:01:22.798566] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798571] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.407 [2024-05-15 16:01:22.798581] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798586] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798591] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.407 [2024-05-15 16:01:22.798598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.798609] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 [2024-05-15 16:01:22.798811] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 [2024-05-15 16:01:22.798817] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.407 [2024-05-15 16:01:22.798822] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798826] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.407 [2024-05-15 16:01:22.798836] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798841] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.798846] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.407 [2024-05-15 16:01:22.798853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.798864] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 [2024-05-15 16:01:22.799075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 [2024-05-15 16:01:22.799081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:24.407 [2024-05-15 16:01:22.799086] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.799091] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.407 [2024-05-15 16:01:22.799101] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.799105] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.799110] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.407 [2024-05-15 16:01:22.799117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.799128] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 [2024-05-15 16:01:22.799272] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 [2024-05-15 16:01:22.799280] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.407 [2024-05-15 16:01:22.799285] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.799290] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.407 [2024-05-15 16:01:22.799300] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.799305] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.407 [2024-05-15 16:01:22.799310] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.407 [2024-05-15 16:01:22.799317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.407 [2024-05-15 16:01:22.799329] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.407 [2024-05-15 16:01:22.799459] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.407 [2024-05-15 16:01:22.799466] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.799470] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799475] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.799485] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799490] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799495] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.408 [2024-05-15 16:01:22.799502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.408 [2024-05-15 16:01:22.799514] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.408 [2024-05-15 16:01:22.799643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.408 [2024-05-15 16:01:22.799650] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.799655] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799660] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.799670] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799675] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799679] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.408 [2024-05-15 16:01:22.799686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.408 [2024-05-15 16:01:22.799697] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.408 [2024-05-15 16:01:22.799934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.408 [2024-05-15 16:01:22.799943] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.799947] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799952] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.799962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.799972] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.408 [2024-05-15 16:01:22.799979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.408 [2024-05-15 16:01:22.799990] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.408 [2024-05-15 16:01:22.800212] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.408 [2024-05-15 16:01:22.800219] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.800223] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.800238] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800248] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.408 [2024-05-15 16:01:22.800255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.408 [2024-05-15 16:01:22.800267] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.408 [2024-05-15 16:01:22.800396] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.408 [2024-05-15 16:01:22.800403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.800409] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800415] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.800427] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800433] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800438] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.408 [2024-05-15 16:01:22.800445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.408 [2024-05-15 16:01:22.800457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.408 [2024-05-15 16:01:22.800617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.408 [2024-05-15 16:01:22.800624] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.800629] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800634] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.800644] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800649] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800654] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.408 [2024-05-15 16:01:22.800660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.408 [2024-05-15 16:01:22.800672] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.408 [2024-05-15 16:01:22.800840] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.408 [2024-05-15 16:01:22.800847] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.800854] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800858] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.800869] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800874] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.800879] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.408 [2024-05-15 16:01:22.800886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.408 [2024-05-15 16:01:22.800897] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.408 [2024-05-15 16:01:22.801061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.408 [2024-05-15 16:01:22.801069] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.801073] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.801078] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.801088] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.801094] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.801098] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.408 
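The near-identical blocks above are iterations of a polling loop. After "Prepare to destruct SSD", the host reads and then writes the controller configuration property (the single FABRIC PROPERTY SET earlier, followed by nvme_ctrlr_shutdown_set_cc_done with RTD3E = 0 and a 10000 ms timeout), then re-issues FABRIC PROPERTY GET on cid 3 to poll controller status (CSTS.SHST in NVMe terms) until shutdown is reported complete; the completion record a few lines below shows the whole sequence finished in 7 milliseconds. A quick sanity check on the loop's size, assuming this console output is saved locally as build.log:

    # count the status polls issued while waiting for controller shutdown
    grep -o 'FABRIC PROPERTY GET qid:0 cid:3' build.log | wc -l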
[2024-05-15 16:01:22.801105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.408 [2024-05-15 16:01:22.801118] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.408 [2024-05-15 16:01:22.805198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.408 [2024-05-15 16:01:22.805211] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.408 [2024-05-15 16:01:22.805216] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.805221] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.408 [2024-05-15 16:01:22.805234] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.805239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:24.408 [2024-05-15 16:01:22.805243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x62aca0) 00:23:24.409 [2024-05-15 16:01:22.805251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.409 [2024-05-15 16:01:22.805265] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x694da0, cid 3, qid 0 00:23:24.409 [2024-05-15 16:01:22.805590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:24.409 [2024-05-15 16:01:22.805597] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:24.409 [2024-05-15 16:01:22.805601] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:24.409 [2024-05-15 16:01:22.805606] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x694da0) on tqpair=0x62aca0 00:23:24.409 [2024-05-15 16:01:22.805615] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:24.409 0 Kelvin (-273 Celsius) 00:23:24.409 Available Spare: 0% 00:23:24.409 Available Spare Threshold: 0% 00:23:24.409 Life Percentage Used: 0% 00:23:24.409 Data Units Read: 0 00:23:24.409 Data Units Written: 0 00:23:24.409 Host Read Commands: 0 00:23:24.409 Host Write Commands: 0 00:23:24.409 Controller Busy Time: 0 minutes 00:23:24.409 Power Cycles: 0 00:23:24.409 Power On Hours: 0 hours 00:23:24.409 Unsafe Shutdowns: 0 00:23:24.409 Unrecoverable Media Errors: 0 00:23:24.409 Lifetime Error Log Entries: 0 00:23:24.409 Warning Temperature Time: 0 minutes 00:23:24.409 Critical Temperature Time: 0 minutes 00:23:24.409 00:23:24.409 Number of Queues 00:23:24.409 ================ 00:23:24.409 Number of I/O Submission Queues: 127 00:23:24.409 Number of I/O Completion Queues: 127 00:23:24.409 00:23:24.409 Active Namespaces 00:23:24.409 ================= 00:23:24.409 Namespace ID:1 00:23:24.409 Error Recovery Timeout: Unlimited 00:23:24.409 Command Set Identifier: NVM (00h) 00:23:24.409 Deallocate: Supported 00:23:24.409 Deallocated/Unwritten Error: Not Supported 00:23:24.409 Deallocated Read Value: Unknown 00:23:24.409 Deallocate in Write Zeroes: Not Supported 00:23:24.409 Deallocated Guard Field: 0xFFFF 00:23:24.409 Flush: Supported 00:23:24.409 Reservation: Supported 00:23:24.409 Namespace Sharing Capabilities: Multiple Controllers 00:23:24.409 Size (in LBAs): 131072 (0GiB) 00:23:24.409 Capacity (in LBAs): 131072 (0GiB) 00:23:24.409 Utilization (in LBAs): 131072 (0GiB) 00:23:24.409 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:23:24.409 EUI64: ABCDEF0123456789 00:23:24.409 UUID: 95015d80-ccd0-4474-bccd-c8a287ea8e02 00:23:24.409 Thin Provisioning: Not Supported 00:23:24.409 Per-NS Atomic Units: Yes 00:23:24.409 Atomic Boundary Size (Normal): 0 00:23:24.409 Atomic Boundary Size (PFail): 0 00:23:24.409 Atomic Boundary Offset: 0 00:23:24.409 Maximum Single Source Range Length: 65535 00:23:24.409 Maximum Copy Length: 65535 00:23:24.409 Maximum Source Range Count: 1 00:23:24.409 NGUID/EUI64 Never Reused: No 00:23:24.409 Namespace Write Protected: No 00:23:24.409 Number of LBA Formats: 1 00:23:24.409 Current LBA Format: LBA Format #00 00:23:24.409 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:24.409 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.409 rmmod nvme_tcp 00:23:24.409 rmmod nvme_fabrics 00:23:24.409 rmmod nvme_keyring 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3840621 ']' 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3840621 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3840621 ']' 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3840621 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:24.409 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3840621 00:23:24.669 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:24.669 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:24.669 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3840621' 00:23:24.669 killing process with pid 3840621 00:23:24.669 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3840621 00:23:24.669 [2024-05-15 16:01:22.983563] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:24.669 16:01:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3840621 00:23:24.669 16:01:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.669 16:01:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.669 16:01:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.669 16:01:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.669 16:01:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.669 16:01:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.669 16:01:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.669 16:01:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.207 16:01:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.207 00:23:27.207 real 0m10.183s 00:23:27.207 user 0m7.818s 00:23:27.207 sys 0m5.175s 00:23:27.207 16:01:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:27.207 16:01:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.207 ************************************ 00:23:27.207 END TEST nvmf_identify 00:23:27.207 ************************************ 00:23:27.207 16:01:25 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:27.207 16:01:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:27.207 16:01:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:27.207 16:01:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:27.207 ************************************ 00:23:27.207 START TEST nvmf_perf 00:23:27.207 ************************************ 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:27.207 * Looking for test storage... 
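That closes the identify stage: the subsystem is deleted over RPC (nvmf_delete_subsystem), nvme_tcp/nvme_fabrics/nvme_keyring are unloaded, target pid 3840621 is killed (the [listen_]address.transport deprecation warning is incidental), the harness records the elapsed time (real 0m10.183s) under the END TEST banner, and run_test moves on to nvmf_perf. Replaying just this next stage outside Jenkins should look like the following, assuming root, the workspace path from this log, and the same e810 NIC inventory:

    # re-run only the perf host test (path taken from this log)
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/host/perf.sh --transport=tcp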
00:23:27.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.207 16:01:25 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.207 16:01:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:33.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:33.821 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:33.821 Found net devices under 0000:af:00.0: cvl_0_0 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:33.821 Found net devices under 0000:af:00.1: cvl_0_1 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:33.821 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:23:33.822 00:23:33.822 --- 10.0.0.2 ping statistics --- 00:23:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.822 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:23:33.822 00:23:33.822 --- 10.0.0.1 ping statistics --- 00:23:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.822 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3844359 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3844359 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3844359 ']' 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.822 16:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:33.822 [2024-05-15 16:01:31.793510] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:33.822 [2024-05-15 16:01:31.793559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.822 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.822 [2024-05-15 16:01:31.865499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.822 [2024-05-15 16:01:31.938958] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.822 [2024-05-15 16:01:31.938996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
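The commands above are nvmf/common.sh's nvmf_tcp_init at work: of the two E810 ports found earlier, cvl_0_0 is moved into a fresh network namespace to play the target, cvl_0_1 stays on the host as the initiator, both get a /24 on 10.0.0.0, port 4420 is opened, and a ping in each direction proves the link before nvmf_tgt is launched inside the namespace. A condensed sketch of the same rig, using the interface and namespace names from this log (the recipe itself is plain iproute2/iptables, nothing SPDK-specific):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                                  # target side gets its own namespace
  ip link set cvl_0_0 netns "$NS"                     # first E810 port -> target namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays on the host
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions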
00:23:33.822 [2024-05-15 16:01:31.939006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.822 [2024-05-15 16:01:31.939014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.822 [2024-05-15 16:01:31.939021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.822 [2024-05-15 16:01:31.939061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.822 [2024-05-15 16:01:31.939160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.822 [2024-05-15 16:01:31.939244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.822 [2024-05-15 16:01:31.939246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.081 16:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.081 16:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:34.081 16:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.081 16:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.081 16:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:34.081 16:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.340 16:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:34.340 16:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:37.631 16:01:35 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:37.631 16:01:35 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:37.631 16:01:35 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:23:37.631 16:01:35 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:37.631 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:37.631 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:23:37.631 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:37.631 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:37.631 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.890 [2024-05-15 16:01:36.211494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.890 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.890 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:37.890 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:38.149 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:38.149 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:38.408 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:38.408 [2024-05-15 16:01:36.951386] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:38.408 [2024-05-15 16:01:36.951656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.666 16:01:36 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:38.667 16:01:37 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:23:38.667 16:01:37 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:38.667 16:01:37 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:38.667 16:01:37 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:40.045 Initializing NVMe Controllers 00:23:40.045 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:23:40.045 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:23:40.045 Initialization complete. Launching workers. 00:23:40.045 ======================================================== 00:23:40.045 Latency(us) 00:23:40.045 Device Information : IOPS MiB/s Average min max 00:23:40.045 PCIE (0000:d8:00.0) NSID 1 from core 0: 101810.85 397.70 314.01 34.87 4465.92 00:23:40.045 ======================================================== 00:23:40.045 Total : 101810.85 397.70 314.01 34.87 4465.92 00:23:40.045 00:23:40.045 16:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:40.045 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.423 Initializing NVMe Controllers 00:23:41.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:41.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:41.423 Initialization complete. Launching workers. 
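Everything the target needs was configured above through rpc.py against the freshly started nvmf_tgt: a TCP transport, one subsystem, a Malloc ramdisk plus the local NVMe drive as namespaces, and data and discovery listeners on 10.0.0.2:4420. Collapsed into dependency order, the bring-up is just the sketch below (rpc.py path shortened to $RPC; the NQN, serial, and addresses are the ones used in this run). The q=1 fabrics perf numbers continue right after it.

  RPC="scripts/rpc.py"                                           # assumed to point at SPDK's rpc.py
  $RPC nvmf_create_transport -t tcp -o                           # the transport options this suite uses
  $RPC bdev_malloc_create 64 512                                 # 64 MiB ramdisk, 512 B sectors -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1  # local drive attached via gen_nvme.sh above
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420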
00:23:41.423 ======================================================== 00:23:41.424 Latency(us) 00:23:41.424 Device Information : IOPS MiB/s Average min max 00:23:41.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 66.00 0.26 15256.85 502.98 45471.90 00:23:41.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23032.15 7963.32 47898.13 00:23:41.424 ======================================================== 00:23:41.424 Total : 111.00 0.43 18409.00 502.98 47898.13 00:23:41.424 00:23:41.424 16:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:41.424 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.803 Initializing NVMe Controllers 00:23:42.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:42.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:42.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:42.803 Initialization complete. Launching workers. 00:23:42.803 ======================================================== 00:23:42.803 Latency(us) 00:23:42.803 Device Information : IOPS MiB/s Average min max 00:23:42.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8385.04 32.75 3836.80 761.63 42368.66 00:23:42.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3763.98 14.70 8563.79 7184.04 16723.37 00:23:42.804 ======================================================== 00:23:42.804 Total : 12149.02 47.46 5301.30 761.63 42368.66 00:23:42.804 00:23:42.804 16:01:40 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:42.804 16:01:40 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:42.804 16:01:40 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:42.804 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.341 Initializing NVMe Controllers 00:23:45.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.341 Controller IO queue size 128, less than required. 00:23:45.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:45.341 Controller IO queue size 128, less than required. 00:23:45.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:45.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:45.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:45.341 Initialization complete. Launching workers. 
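The columns in these perf tables tie together arithmetically, which makes a handy sanity check when skimming: MiB/s is IOPS times the 4 KiB IO size, and at -q 1 the average latency is roughly 10^6/IOPS microseconds (66 IOPS -> about 15,152 us against the reported 15,256.85 us; the small gap is plausibly setup overhead inside the 1 s run). Checking the q=1 NSID 1 row from the table above:

  awk 'BEGIN { iops = 66.00; iosize = 4096
               printf "%.2f MiB/s\n", iops * iosize / 1048576 }'   # prints 0.26, matching the table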
00:23:45.341 ======================================================== 00:23:45.341 Latency(us) 00:23:45.341 Device Information : IOPS MiB/s Average min max 00:23:45.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 960.34 240.09 139144.12 78018.16 230424.39 00:23:45.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.01 144.00 233840.55 79164.68 384851.65 00:23:45.341 ======================================================== 00:23:45.341 Total : 1536.35 384.09 174647.59 78018.16 384851.65 00:23:45.341 00:23:45.341 16:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:45.341 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.341 No valid NVMe controllers or AIO or URING devices found 00:23:45.341 Initializing NVMe Controllers 00:23:45.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.341 Controller IO queue size 128, less than required. 00:23:45.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:45.341 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:45.341 Controller IO queue size 128, less than required. 00:23:45.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:45.341 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:45.341 WARNING: Some requested NVMe devices were skipped 00:23:45.341 16:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:45.341 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.880 Initializing NVMe Controllers 00:23:47.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:47.880 Controller IO queue size 128, less than required. 00:23:47.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:47.880 Controller IO queue size 128, less than required. 00:23:47.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:47.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:47.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:47.880 Initialization complete. Launching workers. 
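The 'No valid NVMe controllers' block just above is the expected outcome of a deliberately awkward request, not a failure: -o 36964 is not a multiple of the namespaces' 512 B sector size (36964 = 72 x 512 + 100), so perf warns, drops both namespaces from the test, and has nothing left to attach to. A quick way to check a candidate IO size and round it to something valid (sector size assumed 512, as in this log):

  iosize=36964 ss=512
  (( iosize % ss == 0 )) || echo "IO size $iosize is not a multiple of sector size $ss"
  echo "nearest valid sizes: $(( iosize / ss * ss )) or $(( (iosize / ss + 1) * ss ))"   # 36864 or 37376

The --transport-stat run whose workers were just launched prints its per-poll-group TCP counters next.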
00:23:47.880 00:23:47.880 ==================== 00:23:47.880 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:47.880 TCP transport: 00:23:47.880 polls: 51157 00:23:47.880 idle_polls: 17750 00:23:47.880 sock_completions: 33407 00:23:47.880 nvme_completions: 3451 00:23:47.880 submitted_requests: 5166 00:23:47.880 queued_requests: 1 00:23:47.880 00:23:47.880 ==================== 00:23:47.880 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:47.880 TCP transport: 00:23:47.880 polls: 51910 00:23:47.880 idle_polls: 16956 00:23:47.880 sock_completions: 34954 00:23:47.880 nvme_completions: 3533 00:23:47.880 submitted_requests: 5334 00:23:47.880 queued_requests: 1 00:23:47.880 ======================================================== 00:23:47.880 Latency(us) 00:23:47.880 Device Information : IOPS MiB/s Average min max 00:23:47.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 861.16 215.29 152599.24 80323.66 257527.44 00:23:47.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 881.63 220.41 148512.35 71217.92 246847.69 00:23:47.880 ======================================================== 00:23:47.880 Total : 1742.79 435.70 150531.80 71217.92 257527.44 00:23:47.880 00:23:47.880 16:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.140 rmmod nvme_tcp 00:23:48.140 rmmod nvme_fabrics 00:23:48.140 rmmod nvme_keyring 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3844359 ']' 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3844359 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3844359 ']' 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3844359 00:23:48.140 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:23:48.399 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:48.399 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3844359 00:23:48.399 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:48.399 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:48.399 16:01:46 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3844359' 00:23:48.399 killing process with pid 3844359 00:23:48.399 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3844359 00:23:48.399 [2024-05-15 16:01:46.753717] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:48.399 16:01:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3844359 00:23:50.939 16:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.939 16:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.939 16:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.939 16:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.939 16:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.939 16:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.939 16:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.939 16:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.851 16:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:52.851 00:23:52.851 real 0m25.578s 00:23:52.851 user 1m7.706s 00:23:52.851 sys 0m8.091s 00:23:52.851 16:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:52.851 16:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.851 ************************************ 00:23:52.851 END TEST nvmf_perf 00:23:52.851 ************************************ 00:23:52.851 16:01:50 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:52.851 16:01:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:52.851 16:01:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:52.851 16:01:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.851 ************************************ 00:23:52.851 START TEST nvmf_fio_host 00:23:52.851 ************************************ 00:23:52.851 16:01:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:52.851 * Looking for test storage... 
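Before the fio host test below finds its footing, note how nvmf_perf cleaned up above: the EXIT trap runs nvmftestfini, which unloads the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills target pid 3844359, removes the SPDK network namespace, and flushes the leftover initiator address. In outline (the _remove_spdk_ns body is hidden behind xtrace_disable in the log, so ip netns delete is an assumed equivalent):

  modprobe -r nvme-tcp nvme-fabrics      # nvme_keyring is pulled out as a dependency
  kill "$nvmfpid" && wait "$nvmfpid"     # stop the nvmf_tgt started at the top of the test
  ip netns delete cvl_0_0_ns_spdk        # assumed: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1               # drop 10.0.0.1 from the initiator port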
00:23:52.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.851 16:01:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.851 16:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.851 16:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.851 16:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.851 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.852 16:01:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
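The enumeration that follows walks the PCI bus exactly as the first test did: collect the functions whose vendor:device ID marks a supported NIC (0x8086:0x159b, the E810, in this run), then map each function to its kernel net device through the net/ directory under /sys/bus/pci/devices. The same idea as a standalone loop, with lspci standing in for the suite's own pci_bus_cache (a sketch, not the suite's code):

  # Print net devices backed by Intel E810 functions (vendor 0x8086, device 0x159b).
  for pci in $(lspci -Dd 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done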
00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:59.421 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:59.421 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:59.421 Found net devices under 0000:af:00.0: cvl_0_0 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:59.421 Found net devices under 0000:af:00.1: cvl_0_1 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:59.421 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.422 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.682 16:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:23:59.682 00:23:59.682 --- 10.0.0.2 ping statistics --- 00:23:59.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.682 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:23:59.682 00:23:59.682 --- 10.0.0.1 ping statistics --- 00:23:59.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.682 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3851001 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3851001 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3851001 ']' 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:59.682 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.682 [2024-05-15 16:01:58.101366] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:59.682 [2024-05-15 16:01:58.101413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.682 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.682 [2024-05-15 16:01:58.174075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.941 [2024-05-15 16:01:58.249558] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:59.941 [2024-05-15 16:01:58.249593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.941 [2024-05-15 16:01:58.249603] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.941 [2024-05-15 16:01:58.249612] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.941 [2024-05-15 16:01:58.249619] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.941 [2024-05-15 16:01:58.249663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.941 [2024-05-15 16:01:58.249757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.941 [2024-05-15 16:01:58.249818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.941 [2024-05-15 16:01:58.249820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.510 [2024-05-15 16:01:58.921986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.510 Malloc1 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.510 16:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:24:00.510 [2024-05-15 16:01:59.016482] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:00.510 [2024-05-15 16:01:59.016735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.510 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:00.511 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:00.511 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:00.511 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:00.511 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:00.511 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.511 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:00.511 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:00.511 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:00.784 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:00.784 
16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:00.784 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:00.784 16:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:01.043 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:01.043 fio-3.35 00:24:01.043 Starting 1 thread 00:24:01.043 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.592 00:24:03.592 test: (groupid=0, jobs=1): err= 0: pid=3851416: Wed May 15 16:02:01 2024 00:24:03.592 read: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(91.6MiB/2004msec) 00:24:03.592 slat (nsec): min=1558, max=225283, avg=1677.56, stdev=2041.69 00:24:03.592 clat (usec): min=2951, max=16127, avg=6340.07, stdev=1608.89 00:24:03.592 lat (usec): min=2952, max=16128, avg=6341.75, stdev=1609.00 00:24:03.592 clat percentiles (usec): 00:24:03.592 | 1.00th=[ 4113], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5342], 00:24:03.592 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6128], 00:24:03.592 | 70.00th=[ 6325], 80.00th=[ 6915], 90.00th=[ 8455], 95.00th=[10028], 00:24:03.592 | 99.00th=[12256], 99.50th=[13173], 99.90th=[15533], 99.95th=[15664], 00:24:03.592 | 99.99th=[15795] 00:24:03.592 bw ( KiB/s): min=44752, max=48496, per=99.86%, avg=46754.00, stdev=1592.57, samples=4 00:24:03.592 iops : min=11188, max=12124, avg=11688.50, stdev=398.14, samples=4 00:24:03.592 write: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(91.0MiB/2004msec); 0 zone resets 00:24:03.592 slat (nsec): min=1606, max=211346, avg=1750.33, stdev=1550.26 00:24:03.592 clat (usec): min=1896, max=10424, avg=4571.75, stdev=885.20 00:24:03.592 lat (usec): min=1898, max=10426, avg=4573.50, stdev=885.37 00:24:03.592 clat percentiles (usec): 00:24:03.592 | 1.00th=[ 2704], 5.00th=[ 3195], 10.00th=[ 3523], 20.00th=[ 3949], 00:24:03.592 | 30.00th=[ 4228], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4686], 00:24:03.592 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5407], 95.00th=[ 6194], 00:24:03.592 | 99.00th=[ 7570], 99.50th=[ 8356], 99.90th=[ 9503], 99.95th=[10290], 00:24:03.592 | 99.99th=[10421] 00:24:03.592 bw ( KiB/s): min=45136, max=47240, per=100.00%, avg=46476.00, stdev=931.73, samples=4 00:24:03.592 iops : min=11284, max=11810, avg=11619.00, stdev=232.93, samples=4 00:24:03.592 lat (msec) : 2=0.01%, 4=11.00%, 10=86.39%, 20=2.60% 00:24:03.592 cpu : usr=61.56%, sys=31.65%, ctx=34, majf=0, minf=4 00:24:03.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:03.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.592 issued rwts: total=23457,23285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.592 00:24:03.592 Run status group 0 (all jobs): 00:24:03.592 READ: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=91.6MiB (96.1MB), run=2004-2004msec 00:24:03.592 WRITE: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=91.0MiB (95.4MB), run=2004-2004msec 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:03.592 16:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:03.849 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:03.849 fio-3.35 00:24:03.849 Starting 1 thread 00:24:03.849 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.375 00:24:06.375 test: (groupid=0, jobs=1): err= 0: pid=3852075: Wed May 15 16:02:04 2024 00:24:06.375 read: IOPS=9102, BW=142MiB/s (149MB/s)(285MiB/2004msec) 00:24:06.375 slat (usec): min=2, max=413, avg= 2.90, stdev= 4.27 00:24:06.375 clat (usec): min=2613, max=51605, avg=8902.83, stdev=5709.66 00:24:06.375 lat (usec): min=2616, max=51612, avg=8905.73, stdev=5709.98 
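None of the fio jobs in this test touch a block device: host/fio.sh LD_PRELOADs SPDK's fio plugin so that ioengine=spdk (set inside the job files) drives the NVMe/TCP target directly, with the whole connection string packed into --filename. Stripped of the harness and of the libasan preload-ordering dance logged above, the first job reduces to the sketch below ($SPDK is an assumed build-tree location); the percentile breakdown of the 16 KiB mock-SGL run continues right after it.

  SPDK=/path/to/spdk                                   # assumption: a built SPDK tree with the fio plugin
  LD_PRELOAD="$SPDK/build/fio/spdk_nvme" fio "$SPDK/app/fio/nvme/example_config.fio" \
      --bs=4096 \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'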
00:24:06.375 clat percentiles (usec): 00:24:06.375 | 1.00th=[ 3687], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5866], 00:24:06.375 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7635], 60.00th=[ 8160], 00:24:06.375 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[11863], 95.00th=[21890], 00:24:06.375 | 99.00th=[43779], 99.50th=[46924], 99.90th=[50594], 99.95th=[51119], 00:24:06.375 | 99.99th=[51119] 00:24:06.375 bw ( KiB/s): min=61984, max=85536, per=49.00%, avg=71360.00, stdev=10035.12, samples=4 00:24:06.375 iops : min= 3874, max= 5346, avg=4460.00, stdev=627.19, samples=4 00:24:06.375 write: IOPS=5668, BW=88.6MiB/s (92.9MB/s)(146MiB/1651msec); 0 zone resets 00:24:06.375 slat (usec): min=28, max=435, avg=31.00, stdev= 8.81 00:24:06.375 clat (usec): min=3478, max=52678, avg=9138.26, stdev=4317.83 00:24:06.375 lat (usec): min=3507, max=52708, avg=9169.27, stdev=4320.22 00:24:06.375 clat percentiles (usec): 00:24:06.375 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7308], 00:24:06.375 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:24:06.375 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[11994], 00:24:06.375 | 99.00th=[25035], 99.50th=[47449], 99.90th=[51119], 99.95th=[52167], 00:24:06.375 | 99.99th=[52691] 00:24:06.375 bw ( KiB/s): min=64576, max=89088, per=81.97%, avg=74344.00, stdev=10522.81, samples=4 00:24:06.375 iops : min= 4036, max= 5568, avg=4646.50, stdev=657.68, samples=4 00:24:06.375 lat (msec) : 4=1.27%, 10=81.90%, 20=11.70%, 50=4.95%, 100=0.18% 00:24:06.375 cpu : usr=71.44%, sys=19.32%, ctx=135, majf=0, minf=1 00:24:06.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:06.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:06.375 issued rwts: total=18241,9359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:06.375 00:24:06.375 Run status group 0 (all jobs): 00:24:06.375 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=285MiB (299MB), run=2004-2004msec 00:24:06.375 WRITE: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=146MiB (153MB), run=1651-1651msec 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.375 rmmod nvme_tcp 00:24:06.375 rmmod nvme_fabrics 00:24:06.375 rmmod nvme_keyring 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3851001 ']' 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3851001 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3851001 ']' 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3851001 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3851001 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3851001' 00:24:06.375 killing process with pid 3851001 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3851001 00:24:06.375 [2024-05-15 16:02:04.759023] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:06.375 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3851001 00:24:06.634 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.634 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.634 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.634 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.634 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.634 16:02:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.634 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.634 16:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.534 16:02:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.534 00:24:08.534 real 0m16.008s 00:24:08.534 user 0m47.105s 00:24:08.534 sys 0m7.655s 00:24:08.534 16:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:08.534 16:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.534 ************************************ 00:24:08.534 END TEST nvmf_fio_host 00:24:08.534 ************************************ 00:24:08.792 16:02:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:08.792 16:02:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:08.792 16:02:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:24:08.792 16:02:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:08.792 ************************************ 00:24:08.792 START TEST nvmf_failover 00:24:08.792 ************************************ 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:08.792 * Looking for test storage... 00:24:08.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.792 16:02:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:15.373 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:15.373 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:15.373 Found net devices under 0000:af:00.0: cvl_0_0 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:15.373 Found net devices under 0000:af:00.1: cvl_0_1 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.373 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.630 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.630 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.630 16:02:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:15.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:24:15.630 00:24:15.630 --- 10.0.0.2 ping statistics --- 00:24:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.630 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:24:15.630 00:24:15.630 --- 10.0.0.1 ping statistics --- 00:24:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.630 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3856042 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3856042 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3856042 ']' 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:15.630 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:15.630 [2024-05-15 16:02:14.101142] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
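Before the target app's EAL output continues below, it is worth restating the topology that nvmf_tcp_init assembled above. This is a condensed sketch of the same commands, not a verbatim transcript; the cvl_0_* names are the ice netdevs this rig enumerated under 0000:af:00.0/0000:af:00.1, so they will differ on other hardware:

# One e810 port moves into a private namespace and plays the target;
# the other stays in the root namespace and plays the initiator, so
# NVMe/TCP traffic traverses the physical link rather than loopback.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, then prove reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because NVMF_TARGET_NS_CMD is prepended to NVMF_APP, the nvmf_tgt process launched just above runs entirely inside cvl_0_0_ns_spdk, which is why its listeners can bind 10.0.0.2.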
00:24:15.630 [2024-05-15 16:02:14.101196] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.630 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.630 [2024-05-15 16:02:14.172315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:15.888 [2024-05-15 16:02:14.246381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.888 [2024-05-15 16:02:14.246416] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.888 [2024-05-15 16:02:14.246425] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.888 [2024-05-15 16:02:14.246434] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.888 [2024-05-15 16:02:14.246441] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.888 [2024-05-15 16:02:14.246543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.888 [2024-05-15 16:02:14.246627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.888 [2024-05-15 16:02:14.246629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.452 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:16.452 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:16.452 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.452 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.452 16:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.452 16:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.452 16:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:16.709 [2024-05-15 16:02:15.098070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.709 16:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:16.967 Malloc0 00:24:16.967 16:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.967 16:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.224 16:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.482 [2024-05-15 16:02:15.836244] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:17.482 [2024-05-15 16:02:15.836472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.482 16:02:15 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:17.482 [2024-05-15 16:02:16.012957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:17.482 16:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:17.739 [2024-05-15 16:02:16.225665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:17.739 16:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3856413 00:24:17.739 16:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:17.739 16:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.739 16:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3856413 /var/tmp/bdevperf.sock 00:24:17.739 16:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3856413 ']' 00:24:17.739 16:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.739 16:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:17.739 16:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
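For orientation, the xtrace that follows corresponds to the bring-up below, reduced to its underlying RPC calls. This is a condensed sketch rather than a verbatim transcript: the long workspace path is shortened to an $rpc variable and the three add_listener calls are folded into a loop, but the nqn, ports, and bdev geometry are exactly what failover.sh used in this run.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side (inside cvl_0_0_ns_spdk): TCP transport, a 64 MB bdev with
# 512-byte blocks, one subsystem exposing it, and three listeners on
# 10.0.0.2 to fail over between.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: bdevperf was started with -z, so it idles until driven
# over /var/tmp/bdevperf.sock. Attaching the same controller once per
# listener gives bdev_nvme two paths to NVMe0n1.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

bdevperf.py -s /var/tmp/bdevperf.sock perform_tests then kicks off the 15-second verify workload (-q 128 -o 4096 -w verify -t 15) while the listeners are cycled underneath it.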
00:24:17.740 16:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:17.740 16:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:18.672 16:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:18.672 16:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:18.672 16:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:18.930 NVMe0n1 00:24:18.930 16:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:19.188 00:24:19.188 16:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3856622 00:24:19.188 16:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.188 16:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:20.563 16:02:18 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:20.563 [2024-05-15 16:02:18.880521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880678] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 [2024-05-15 16:02:18.880686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fc0 is same with the state(5) to be set 00:24:20.563 16:02:18 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:23.843 16:02:21 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.843 00:24:23.843 16:02:22 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:23.843 [2024-05-15 16:02:22.339274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339455] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.843 [2024-05-15 16:02:22.339465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339527] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the 
state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.339992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 
16:02:22.340025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.844 [2024-05-15 16:02:22.340211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same 
with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340245] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aab80 is same with the state(5) to be set 00:24:23.845 [2024-05-15 16:02:22.340405] 
00:24:23.845 16:02:22 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:27.121 16:02:25 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.121 [2024-05-15 16:02:25.536642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.121 16:02:25 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:28.053 16:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:28.310 [2024-05-15 16:02:26.735496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2301800 is same with the state(5) to be set [... last message repeated through 2024-05-15 16:02:26.736189 ...]
00:24:28.311 16:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3856622 00:24:34.910 0 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3856413 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3856413 ']' 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3856413 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps 
--no-headers -o comm= 3856413 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3856413' 00:24:34.910 killing process with pid 3856413 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3856413 00:24:34.910 16:02:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3856413 00:24:34.910 16:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:34.910 [2024-05-15 16:02:16.298510] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:24:34.910 [2024-05-15 16:02:16.298572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856413 ] 00:24:34.910 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.910 [2024-05-15 16:02:16.369331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.910 [2024-05-15 16:02:16.441935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.910 Running I/O for 15 seconds... 00:24:34.910 [2024-05-15 16:02:18.881083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.910 [2024-05-15 16:02:18.881121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.910 [2024-05-15 16:02:18.881134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.910 [2024-05-15 16:02:18.881143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.910 [2024-05-15 16:02:18.881154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.910 [2024-05-15 16:02:18.881163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.910 [2024-05-15 16:02:18.881173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.910 [2024-05-15 16:02:18.881182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.910 [2024-05-15 16:02:18.881196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x857590 is same with the state(5) to be set 00:24:34.910 [2024-05-15 16:02:18.881243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.910 [2024-05-15 16:02:18.881254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.910 [2024-05-15 16:02:18.881269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... matching ABORTED - SQ DELETION completions and further nvme_io_qpair_print_command entries repeated for every queued I/O (READ lba:98792-99296, WRITE lba:99312-99800) ...]
sqhd:0000 p:0 m:0 dnr:0 00:24:34.913 [2024-05-15 16:02:18.883722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.913 [2024-05-15 16:02:18.883733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.913 [2024-05-15 16:02:18.883743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.913 [2024-05-15 16:02:18.883752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.913 [2024-05-15 16:02:18.883763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.913 [2024-05-15 16:02:18.883771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.913 [2024-05-15 16:02:18.883782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.913 [2024-05-15 16:02:18.883791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.913 [2024-05-15 16:02:18.883811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.913 [2024-05-15 16:02:18.883819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.913 [2024-05-15 16:02:18.883827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99808 len:8 PRP1 0x0 PRP2 0x0 00:24:34.913 [2024-05-15 16:02:18.883836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.913 [2024-05-15 16:02:18.883880] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8785b0 was disconnected and freed. reset controller. 00:24:34.913 [2024-05-15 16:02:18.883897] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:34.913 [2024-05-15 16:02:18.883907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.913 [2024-05-15 16:02:18.886619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.913 [2024-05-15 16:02:18.886649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857590 (9): Bad file descriptor 00:24:34.913 [2024-05-15 16:02:19.045722] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:34.913 [2024-05-15 16:02:22.341498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.913 [2024-05-15 16:02:22.341534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ / ABORTED - SQ DELETION notice pairs repeat for lba:67856 through lba:68712 ...]
00:24:34.916 [2024-05-15 16:02:22.343725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.916 [2024-05-15 16:02:22.343734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.916 [2024-05-15 16:02:22.343744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:34.916 [2024-05-15 16:02:22.343754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE / ABORTED - SQ DELETION notice pairs repeat for lba:68752 through lba:68856 ...]
00:24:34.916 [2024-05-15 16:02:22.344040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:34.916 [2024-05-15 16:02:22.344050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.916 [2024-05-15 16:02:22.344060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.917 [2024-05-15 16:02:22.344069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:22.344079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa20f40 is same with the state(5) to be set
00:24:34.917 [2024-05-15 16:02:22.344091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:34.917 [2024-05-15 16:02:22.344099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:34.917 [2024-05-15 16:02:22.344108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68736 len:8 PRP1 0x0 PRP2 0x0
00:24:34.917 [2024-05-15 16:02:22.344117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:22.344162] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa20f40 was disconnected and freed. reset controller.
00:24:34.917 [2024-05-15 16:02:22.344173] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:34.917 [2024-05-15 16:02:22.344197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.917 [2024-05-15 16:02:22.344207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:22.344217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.917 [2024-05-15 16:02:22.344226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:22.344236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.917 [2024-05-15 16:02:22.344245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:22.344254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.917 [2024-05-15 16:02:22.344263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:22.344273] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.917 [2024-05-15 16:02:22.344296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857590 (9): Bad file descriptor
00:24:34.917 [2024-05-15 16:02:22.347014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.917 [2024-05-15 16:02:22.422266] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:34.917 [2024-05-15 16:02:26.735595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.917 [2024-05-15 16:02:26.735634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:26.735647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.917 [2024-05-15 16:02:26.735657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:26.735667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.917 [2024-05-15 16:02:26.735677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:26.735689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.917 [2024-05-15 16:02:26.735699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:26.735708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x857590 is same with the state(5) to be set
00:24:34.917 [2024-05-15 16:02:26.737145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.917 [2024-05-15 16:02:26.737170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ / ABORTED - SQ DELETION notice pairs repeat for lba:111736 through lba:111864 ...]
00:24:34.917 [2024-05-15 16:02:26.737534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.917 [2024-05-15 16:02:26.737543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.917 [2024-05-15 16:02:26.737555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:34.917 [2024-05-15 16:02:26.737564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE / ABORTED - SQ DELETION notice pairs repeat for lba:112168 through lba:112280 ...]
00:24:34.918 [2024-05-15 16:02:26.737868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:34.918 [2024-05-15 16:02:26.737877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:24:34.918 [2024-05-15 16:02:26.737887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.737896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.737907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.737917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.737928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.737937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.737948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.737957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.737967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.737977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.737987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.737996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 
16:02:26.738086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.918 [2024-05-15 16:02:26.738428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.918 [2024-05-15 16:02:26.738437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.919 [2024-05-15 16:02:26.738673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111880 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.738989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.738999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 
[2024-05-15 16:02:26.739088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.919 [2024-05-15 16:02:26.739239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.919 [2024-05-15 16:02:26.739248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.920 [2024-05-15 16:02:26.739268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.920 [2024-05-15 16:02:26.739287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.920 [2024-05-15 16:02:26.739306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.920 [2024-05-15 16:02:26.739329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.920 [2024-05-15 16:02:26.739348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.920 [2024-05-15 16:02:26.739367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.920 [2024-05-15 16:02:26.739388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112624 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112632 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112640 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 
16:02:26.739507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112648 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112656 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112664 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112672 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112680 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112688 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739705] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112696 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112704 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112712 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112720 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.739835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.739842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.739849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112728 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.739858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.751630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.920 [2024-05-15 16:02:26.751645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.920 [2024-05-15 16:02:26.751657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112736 len:8 PRP1 0x0 PRP2 0x0 00:24:34.920 [2024-05-15 16:02:26.751669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.920 [2024-05-15 16:02:26.751682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
00:24:34.920 [2024-05-15 16:02:26.751691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:34.920 [2024-05-15 16:02:26.751702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112744 len:8 PRP1 0x0 PRP2 0x0
00:24:34.920 [2024-05-15 16:02:26.751714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.920 [2024-05-15 16:02:26.751766] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa20d30 was disconnected and freed. reset controller.
00:24:34.920 [2024-05-15 16:02:26.751781] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:34.920 [2024-05-15 16:02:26.751795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.920 [2024-05-15 16:02:26.751839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857590 (9): Bad file descriptor
00:24:34.920 [2024-05-15 16:02:26.755457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.920 [2024-05-15 16:02:26.915371] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:34.920 Latency(us)
00:24:34.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.920 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:34.920 Verification LBA range: start 0x0 length 0x4000
00:24:34.920 NVMe0n1 : 15.01 11349.35 44.33 1273.12 0.00 10118.32 1232.08 24956.11
00:24:34.920 ===================================================================================================================
00:24:34.920 Total : 11349.35 44.33 1273.12 0.00 10118.32 1232.08 24956.11
00:24:34.920 Received shutdown signal, test time was about 15.000000 seconds
00:24:34.920 Latency(us)
00:24:34.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.921 ===================================================================================================================
00:24:34.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3859258
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3859258 /var/tmp/bdevperf.sock
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3859258 ']'
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:34.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
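Editorially condensed, host/failover.sh@65-75 above amounts to the following sketch. The try.txt log path and $rootdir are assumptions standing in for the workspace paths in the trace (the grep target is not visible in the flattened output), and waitforlisten is the autotest_common.sh helper whose internals are traced next:

  # Count the controller resets recorded by the previous bdevperf run;
  # the test demands exactly three (one per forced failover).
  count=$(grep -c 'Resetting controller successful' "$rootdir/test/nvmf/host/try.txt")
  (( count != 3 )) && exit 1
  # Restart bdevperf idle: -z makes it wait for a perform_tests RPC on the socket.
  "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock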
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:24:34.921 16:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:35.485 16:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:35.485 16:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:24:35.485 16:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:35.743 [2024-05-15 16:02:34.158230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:35.743 16:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:36.000 [2024-05-15 16:02:34.342735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:36.000 16:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:36.257 NVMe0n1
00:24:36.257 16:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:36.514
00:24:36.514 16:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:36.772
00:24:36.772 16:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:36.772 16:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:37.030 16:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:37.030 16:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:40.308 16:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:40.308 16:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:40.308 16:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3860222
00:24:40.308 16:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:40.308 16:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3860222
00:24:41.678 0
00:24:41.678 16:02:39 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:41.678 [2024-05-15 16:02:33.184567] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:24:41.678 [2024-05-15 16:02:33.184619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859258 ]
00:24:41.678 EAL: No free 2048 kB hugepages reported on node 1
00:24:41.678 [2024-05-15 16:02:33.253360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:41.678 [2024-05-15 16:02:33.318298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:41.678 [2024-05-15 16:02:35.511217] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:41.678 [2024-05-15 16:02:35.511266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.678 [2024-05-15 16:02:35.511280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three further queued ASYNC EVENT REQUESTs (cid:1-3) aborted with the same SQ DELETION status between 16:02:35.511291 and 16:02:35.511339; repeated records elided ...]
00:24:41.678 [2024-05-15 16:02:35.511348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.678 [2024-05-15 16:02:35.511369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.678 [2024-05-15 16:02:35.511385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe66590 (9): Bad file descriptor
00:24:41.678 [2024-05-15 16:02:35.563448] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:41.678 Running I/O for 1 seconds...
00:24:41.678 Latency(us)
00:24:41.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:41.678 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:41.678 Verification LBA range: start 0x0 length 0x4000
00:24:41.678 NVMe0n1 : 1.01 11064.65 43.22 0.00 0.00 11522.03 2398.62 21600.67
00:24:41.678 ===================================================================================================================
00:24:41.678 Total : 11064.65 43.22 0.00 0.00 11522.03 2398.62 21600.67
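Taken together, the trace from host/failover.sh@76 onward is the whole failover exercise: publish two extra listeners, attach the same remote controller through all three ports, then drop the active path so bdev_nvme has to fail over while bdevperf keeps verifying. A condensed sketch of the same RPC sequence (here rpc.py stands for the full scripts/rpc.py path used in the trace):

  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      # Same -b NVMe0 each time: the extra trids become failover paths, not new bdevs.
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # Drop the active path; bdev_nvme should fail over to one of the remaining trids.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # controller must survive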
00:24:41.678 16:02:39 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:41.678 16:02:39 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:41.935 16:02:40 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:41.935 16:02:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:41.935 16:02:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:42.192 16:02:40 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:42.192 16:02:40 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3859258
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3859258 ']'
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3859258
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859258
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859258'
00:24:45.479 killing process with pid 3859258
00:24:45.479 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3859258
00:24:45.737 16:02:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3859258
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
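killprocess above is an autotest_common.sh helper; read back from the @946-@970 line references in the trace, its shape is roughly the following (a reconstruction for readers, not the verbatim source):

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1                             # @946: reject an empty pid
      kill -0 "$pid" || return 1                            # @950: process must still be alive
      if [ "$(uname)" = Linux ]; then                       # @951
          process_name=$(ps --no-headers -o comm= "$pid")   # @952: resolves to reactor_0 here
      fi
      # @956 special-cases process_name = sudo; that branch is not taken in this run
      echo "killing process with pid $pid"                  # @964
      kill "$pid"                                           # @965
      wait "$pid"                                           # @970
  }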
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:45.737 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:45.737 rmmod nvme_tcp
00:24:45.995 rmmod nvme_fabrics
00:24:45.995 rmmod nvme_keyring
00:24:45.995 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:45.995 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3856042 ']'
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3856042
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3856042 ']'
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3856042
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3856042
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3856042'
00:24:45.996 killing process with pid 3856042
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3856042
00:24:45.996 [2024-05-15 16:02:44.422379] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:24:45.996 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3856042
00:24:46.254 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:46.254 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:46.254 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:46.254 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:46.254 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:46.254 16:02:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:46.254 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:46.254 16:02:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:48.157 16:02:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:48.416
00:24:48.416 real 0m39.572s
00:24:48.416 user 2m2.153s
00:24:48.416 sys 0m9.917s
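The nvmfcleanup steps just traced (nvmf/common.sh@117-@125) unload the host-side NVMe/TCP modules. A rough reconstruction, under the assumption that the @121 loop simply retries the unload until the module lets go:

  nvmfcleanup() {
      sync                                   # @117: flush before removing modules
      set +e                                 # @120: modprobe -r may fail while I/O drains
      for i in {1..20}; do                   # @121
          modprobe -v -r nvme-tcp && break   # @122: the rmmod output above shows the
      done                                   #        dependent nvme_fabrics/nvme_keyring going too
      modprobe -v -r nvme-fabrics            # @123
      set -e                                 # @124
  }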
2m2.153s 00:24:48.416 sys 0m9.917s 00:24:48.416 16:02:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:48.416 16:02:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.416 ************************************ 00:24:48.416 END TEST nvmf_failover 00:24:48.416 ************************************ 00:24:48.416 16:02:46 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:48.416 16:02:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:48.416 16:02:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:48.416 16:02:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:48.416 ************************************ 00:24:48.416 START TEST nvmf_host_discovery 00:24:48.416 ************************************ 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:48.416 * Looking for test storage... 00:24:48.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.416 16:02:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
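The array juggling traced here is gather_supported_nvmf_pci_devs building a per-vendor allowlist of NIC device IDs and then scanning the PCI bus against it; because this job runs with SPDK_TEST_NVMF_NICS=e810, only the Intel E810 entries survive into pci_devs. A condensed sketch of that pattern, with the device IDs copied from the trace (the real helper in test/nvmf/common.sh resolves them through a pci_bus_cache lookup rather than literals):

  intel=0x8086 mellanox=0x15b3
  e810=("$intel:0x1592" "$intel:0x159b")      # E810 family device IDs
  x722=("$intel:0x37d2")
  mlx=("$mellanox:0x1017" "$mellanox:0x1019" "$mellanox:0x101d")
  pci_devs=("${e810[@]}")                     # e810 requested: ignore the rest
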
00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:55.021 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:55.021 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:55.021 Found net devices under 0000:af:00.0: cvl_0_0 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:55.021 Found net devices under 0000:af:00.1: cvl_0_1 00:24:55.021 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.022 16:02:53 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.022 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:24:55.280 00:24:55.280 --- 10.0.0.2 ping statistics --- 00:24:55.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.280 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
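Condensed, the nvmf_tcp_init sequence just traced wires the two E810 ports into a self-contained testbed: one port is moved into a private network namespace to act as the target at 10.0.0.2, its sibling stays in the root namespace as the initiator at 10.0.0.1, and a ping in each direction (the second one's replies follow below) proves the path. The commands, gathered from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
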
00:24:55.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:24:55.280 00:24:55.280 --- 10.0.0.1 ping statistics --- 00:24:55.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.280 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3864825 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:55.280 16:02:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3864825 00:24:55.281 16:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3864825 ']' 00:24:55.281 16:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.281 16:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:55.281 16:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.281 16:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:55.281 16:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.281 [2024-05-15 16:02:53.811593] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:24:55.281 [2024-05-15 16:02:53.811638] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.539 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.539 [2024-05-15 16:02:53.883731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.539 [2024-05-15 16:02:53.956231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
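With connectivity confirmed, nvmfappstart loads nvme-tcp and launches the target inside the namespace, then blocks until the app's RPC socket answers; the EAL and reactor notices printing here are that startup. A minimal sketch of the launch-and-wait pattern (the real helpers are nvmfappstart and waitforlisten in the common test scripts; polling rpc_get_methods is just one cheap liveness probe, not the exact mechanism):

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!    # -m 0x2: run on core 1; -e 0xFFFF: enable all tracepoints
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2  # keep polling until the RPC listener is up
  done
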
00:24:55.539 [2024-05-15 16:02:53.956265] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.539 [2024-05-15 16:02:53.956275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.539 [2024-05-15 16:02:53.956283] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.539 [2024-05-15 16:02:53.956306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.539 [2024-05-15 16:02:53.956331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.104 [2024-05-15 16:02:54.654590] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.104 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.104 [2024-05-15 16:02:54.666588] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:56.104 [2024-05-15 16:02:54.666786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:56.362 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.362 16:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:56.362 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.362 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.362 null0 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.363 null1 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3865007 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3865007 /tmp/host.sock 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3865007 ']' 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:56.363 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:56.363 16:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.363 [2024-05-15 16:02:54.745169] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:24:56.363 [2024-05-15 16:02:54.745221] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865007 ] 00:24:56.363 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.363 [2024-05-15 16:02:54.814548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.363 [2024-05-15 16:02:54.889985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.295 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.554 [2024-05-15 16:02:55.885944] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:57.554 
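Taken together, the rpc_cmd calls traced so far amount to a short provisioning script split across two applications: the namespaced target gets a TCP transport, the discovery service on port 8009, two null bdevs, and subsystem cnode0 listening on 4420, while a second app on /tmp/host.sock acts as the discovery client. rpc_cmd is a thin wrapper over scripts/rpc.py, so the equivalent direct calls (arguments copied from the trace) are roughly:

  # Target side, via the default RPC socket inside the namespace:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                 # discovery service
  rpc.py bdev_null_create null0 1000 512         # 1000 MB, 512-byte blocks
  rpc.py bdev_null_create null1 1000 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # Host side, against the second app:
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

The checks above still read back empty because the discovery log page only lists subsystems the querying host is allowed to access; nothing appears on the client side until nvmf_subsystem_add_host grants nqn.2021-12.io.spdk:test access, which happens below.
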
16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.554 16:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:24:57.554 16:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:24:58.120 [2024-05-15 16:02:56.564595] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:58.120 [2024-05-15 16:02:56.564620] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:58.120 [2024-05-15 16:02:56.564635] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:58.378 [2024-05-15 16:02:56.694026] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:58.378 [2024-05-15 16:02:56.755545] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:24:58.378 [2024-05-15 16:02:56.755564] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:58.635 16:02:57 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.635 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:58.892 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.893 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:59.150 
16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.150 [2024-05-15 16:02:57.578535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:59.150 [2024-05-15 16:02:57.579707] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:59.150 [2024-05-15 16:02:57.579729] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
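The notification arithmetic repeated through these checks reduces to one small helper: ask the host app for every event past the last consumed id, count them with jq, and advance the cursor, so each namespace addition is expected to surface exactly one new-bdev event. Mirroring the traced get_notification_count/notify_id pair (a sketch, not the verbatim helper):

  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }
  get_notification_count && ((notification_count == 1))   # e.g. after add_ns
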
00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 
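All of these assertions lean on the same retry primitive, visible in the trace as the max=10/eval/sleep triplet: evaluate the condition string up to ten times, a second apart, and fail the test if it never comes true. Reconstructed from the traced steps (the real helper is waitforcondition in common/autotest_common.sh):

  waitforcondition() {
      local cond=$1 max=10
      while ((max--)); do
          eval "$cond" && return 0   # condition may invoke the rpc helpers
          sleep 1
      done
      return 1                       # give up after roughly ten seconds
  }
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'
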
00:24:59.150 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.150 [2024-05-15 16:02:57.708407] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:59.407 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:59.407 16:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:24:59.664 [2024-05-15 16:02:57.972831] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:59.664 [2024-05-15 16:02:57.972851] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:59.664 [2024-05-15 16:02:57.972858] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:00.229 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:00.230 16:02:58 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.230 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.489 [2024-05-15 16:02:58.830944] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:00.489 [2024-05-15 16:02:58.830965] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.489 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:00.489 [2024-05-15 16:02:58.836759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.490 [2024-05-15 16:02:58.836779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.490 [2024-05-15 16:02:58.836790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.490 [2024-05-15 16:02:58.836815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.490 [2024-05-15 16:02:58.836826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.490 [2024-05-15 16:02:58.836835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.490 [2024-05-15 16:02:58.836844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.490 [2024-05-15 16:02:58.836854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:00.490 [2024-05-15 16:02:58.836863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d1f0 is same with the state(5) to be set 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:00.490 [2024-05-15 16:02:58.846773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d1f0 (9): Bad file descriptor 00:25:00.490 [2024-05-15 16:02:58.856813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.490 [2024-05-15 16:02:58.857248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.857562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.857575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1d1f0 with addr=10.0.0.2, port=4420 00:25:00.490 [2024-05-15 16:02:58.857586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d1f0 is same with the state(5) to be set 00:25:00.490 [2024-05-15 16:02:58.857600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d1f0 (9): Bad file descriptor 00:25:00.490 [2024-05-15 16:02:58.857622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.490 [2024-05-15 16:02:58.857632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.490 [2024-05-15 16:02:58.857643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.490 [2024-05-15 16:02:58.857656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
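
The connect()/errno = 111 storm that begins here is expected: host/discovery.sh@127 has just removed the 4420 listener, so the host's periodic reconnect attempts to that port are refused (errno 111 is ECONNREFUSED) until the discovery poller prunes the stale path, which the "4420 not found" message further below confirms. The target-side step that triggers it, with the command copied from the trace:

# Drop the first listener; any host still holding a 10.0.0.2:4420 path
# keeps logging 'connect() failed, errno = 111' until discovery catches up.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
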
00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.490 [2024-05-15 16:02:58.866870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.490 [2024-05-15 16:02:58.867367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.867697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.867709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1d1f0 with addr=10.0.0.2, port=4420 00:25:00.490 [2024-05-15 16:02:58.867719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d1f0 is same with the state(5) to be set 00:25:00.490 [2024-05-15 16:02:58.867732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d1f0 (9): Bad file descriptor 00:25:00.490 [2024-05-15 16:02:58.867758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.490 [2024-05-15 16:02:58.867768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.490 [2024-05-15 16:02:58.867777] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.490 [2024-05-15 16:02:58.867789] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.490 [2024-05-15 16:02:58.876923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.490 [2024-05-15 16:02:58.877324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.877732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.877744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1d1f0 with addr=10.0.0.2, port=4420 00:25:00.490 [2024-05-15 16:02:58.877754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d1f0 is same with the state(5) to be set 00:25:00.490 [2024-05-15 16:02:58.877767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d1f0 (9): Bad file descriptor 00:25:00.490 [2024-05-15 16:02:58.877779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.490 [2024-05-15 16:02:58.877788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.490 [2024-05-15 16:02:58.877801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.490 [2024-05-15 16:02:58.877826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
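
Further below the script deliberately re-issues bdev_nvme_start_discovery under a NOT wrapper and passes only if the RPC fails. The wrapper is visible in the xtrace (autotest_common.sh@648-@675); this is a rough reconstruction from the 'local es=0', 'es=1', '(( es > 128 ))', and '(( !es == 0 ))' lines, simplified relative to the real helper:

# Invert a command's exit status: succeed only if it failed (sketch; the
# real helper in common/autotest_common.sh also validates the argument
# via valid_exec_arg before running it).
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"  # killed by a signal: propagate as-is
    (( es != 0 ))                   # success iff the wrapped command failed
}
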
00:25:00.490 [2024-05-15 16:02:58.886978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.490 [2024-05-15 16:02:58.887397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.887787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.887800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1d1f0 with addr=10.0.0.2, port=4420 00:25:00.490 [2024-05-15 16:02:58.887810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d1f0 is same with the state(5) to be set 00:25:00.490 [2024-05-15 16:02:58.887822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d1f0 (9): Bad file descriptor 00:25:00.490 [2024-05-15 16:02:58.887847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.490 [2024-05-15 16:02:58.887857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.490 [2024-05-15 16:02:58.887866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.490 [2024-05-15 16:02:58.887877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.490 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:00.490 [2024-05-15 16:02:58.897030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.490 [2024-05-15 16:02:58.897432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.897891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.897903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1d1f0 with addr=10.0.0.2, port=4420 00:25:00.490 
[2024-05-15 16:02:58.897913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d1f0 is same with the state(5) to be set 00:25:00.490 [2024-05-15 16:02:58.897926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d1f0 (9): Bad file descriptor 00:25:00.490 [2024-05-15 16:02:58.897945] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.490 [2024-05-15 16:02:58.897954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.490 [2024-05-15 16:02:58.897963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.490 [2024-05-15 16:02:58.897978] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.490 [2024-05-15 16:02:58.907083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.490 [2024-05-15 16:02:58.907565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.908043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.908056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1d1f0 with addr=10.0.0.2, port=4420 00:25:00.490 [2024-05-15 16:02:58.908066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d1f0 is same with the state(5) to be set 00:25:00.490 [2024-05-15 16:02:58.908080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d1f0 (9): Bad file descriptor 00:25:00.490 [2024-05-15 16:02:58.908107] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.490 [2024-05-15 16:02:58.908117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.490 [2024-05-15 16:02:58.908126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.490 [2024-05-15 16:02:58.908137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
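
Once these retries drain and only the 4421 path remains, the rest of the test (host/discovery.sh@134 onward, below) walks the discovery service through a stop, a restart, and two deliberate failures. Stripped of the wait loops, it reduces to these host-side RPCs (addresses, NQNs, and flags copied from the trace; -w waits for attach, -T is the attach timeout in milliseconds, NOT is the expected-failure wrapper sketched earlier):

# Stop the running discovery service, then restart it under the same name.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Reusing a live discovery name must fail: JSON-RPC -17, "File exists".
NOT scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Nothing listens on 8010, so the 3000 ms attach timeout expires:
# JSON-RPC -110, "Connection timed out".
NOT scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
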
00:25:00.490 [2024-05-15 16:02:58.917138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.490 [2024-05-15 16:02:58.917660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.918150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.490 [2024-05-15 16:02:58.918163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1d1f0 with addr=10.0.0.2, port=4420 00:25:00.490 [2024-05-15 16:02:58.918172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1d1f0 is same with the state(5) to be set 00:25:00.490 [2024-05-15 16:02:58.918186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d1f0 (9): Bad file descriptor 00:25:00.490 [2024-05-15 16:02:58.918208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.490 [2024-05-15 16:02:58.918218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.491 [2024-05-15 16:02:58.918227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.491 [2024-05-15 16:02:58.918238] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.491 [2024-05-15 16:02:58.919214] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:00.491 [2024-05-15 16:02:58.919230] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:25:00.491 16:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.491 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:00.749 16:02:59 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.749 16:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.680 [2024-05-15 16:03:00.223700] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:01.680 [2024-05-15 16:03:00.223720] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:01.680 [2024-05-15 16:03:00.223734] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:01.937 [2024-05-15 16:03:00.310002] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:02.194 [2024-05-15 16:03:00.572234] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:02.194 [2024-05-15 16:03:00.572265] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:02.194 request: 00:25:02.194 { 00:25:02.194 "name": "nvme", 00:25:02.194 "trtype": "tcp", 00:25:02.194 "traddr": "10.0.0.2", 00:25:02.194 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:02.194 "adrfam": "ipv4", 00:25:02.194 "trsvcid": "8009", 00:25:02.194 "wait_for_attach": true, 00:25:02.194 "method": "bdev_nvme_start_discovery", 00:25:02.194 "req_id": 1 00:25:02.194 } 00:25:02.194 Got JSON-RPC error response 00:25:02.194 response: 00:25:02.194 { 00:25:02.194 "code": -17, 00:25:02.194 "message": "File exists" 00:25:02.194 } 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.194 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.195 request: 00:25:02.195 { 00:25:02.195 "name": "nvme_second", 00:25:02.195 "trtype": "tcp", 00:25:02.195 "traddr": "10.0.0.2", 00:25:02.195 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:02.195 "adrfam": "ipv4", 00:25:02.195 "trsvcid": "8009", 00:25:02.195 "wait_for_attach": true, 00:25:02.195 "method": "bdev_nvme_start_discovery", 00:25:02.195 "req_id": 1 00:25:02.195 } 00:25:02.195 Got JSON-RPC error response 00:25:02.195 response: 00:25:02.195 { 00:25:02.195 "code": -17, 00:25:02.195 "message": "File exists" 00:25:02.195 } 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:02.195 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.452 16:03:00 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.452 16:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.382 [2024-05-15 16:03:01.839970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.382 [2024-05-15 16:03:01.840451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.382 [2024-05-15 16:03:01.840466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a15170 with addr=10.0.0.2, port=8010 00:25:03.382 [2024-05-15 16:03:01.840485] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:03.382 [2024-05-15 16:03:01.840494] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:03.382 [2024-05-15 16:03:01.840503] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:04.310 [2024-05-15 16:03:02.842484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.310 [2024-05-15 16:03:02.842896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.310 [2024-05-15 16:03:02.842909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a15170 with addr=10.0.0.2, port=8010 00:25:04.310 [2024-05-15 16:03:02.842925] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:04.310 [2024-05-15 16:03:02.842934] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:04.310 [2024-05-15 16:03:02.842942] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:05.678 [2024-05-15 16:03:03.844455] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:05.678 request: 00:25:05.678 { 00:25:05.678 "name": "nvme_second", 00:25:05.678 "trtype": "tcp", 00:25:05.678 "traddr": "10.0.0.2", 00:25:05.678 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:05.678 "adrfam": "ipv4", 00:25:05.678 "trsvcid": "8010", 00:25:05.678 "attach_timeout_ms": 3000, 00:25:05.678 
"method": "bdev_nvme_start_discovery", 00:25:05.678 "req_id": 1 00:25:05.678 } 00:25:05.678 Got JSON-RPC error response 00:25:05.678 response: 00:25:05.678 { 00:25:05.678 "code": -110, 00:25:05.678 "message": "Connection timed out" 00:25:05.678 } 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3865007 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.678 rmmod nvme_tcp 00:25:05.678 rmmod nvme_fabrics 00:25:05.678 rmmod nvme_keyring 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3864825 ']' 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3864825 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3864825 ']' 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3864825 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:05.678 16:03:03 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3864825 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3864825' 00:25:05.678 killing process with pid 3864825 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3864825 00:25:05.678 [2024-05-15 16:03:04.016128] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3864825 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.678 16:03:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:08.207 00:25:08.207 real 0m19.486s 00:25:08.207 user 0m22.769s 00:25:08.207 sys 0m7.239s 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.207 ************************************ 00:25:08.207 END TEST nvmf_host_discovery 00:25:08.207 ************************************ 00:25:08.207 16:03:06 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:08.207 16:03:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:08.207 16:03:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:08.207 16:03:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.207 ************************************ 00:25:08.207 START TEST nvmf_host_multipath_status 00:25:08.207 ************************************ 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:08.207 * Looking for test storage... 
00:25:08.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:08.207 16:03:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:08.207 16:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:14.815 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:14.815 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
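The trace above shows nvmf/common.sh matching both E810 ports (vendor 0x8086, device 0x159b) out of its PCI bus cache before mapping each function to its net device. A minimal standalone sketch of that discovery step, assuming the standard Linux sysfs layout (the helper name find_e810_net_devs is illustrative, not part of SPDK):

# Print net devices backed by Intel E810 (0x8086:0x159b) PCI functions.
find_e810_net_devs() {
    local pci
    for pci in /sys/bus/pci/devices/*; do
        # vendor/device hold the IDs the trace compares against
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
        [[ $(cat "$pci/device") == 0x159b ]] || continue
        # each matched function exposes its netdev name(s) under net/
        ls "$pci/net" 2>/dev/null
    done
}
find_e810_net_devs

On this node the loop would print cvl_0_0 and cvl_0_1, the two interfaces the test goes on to split between target and initiator.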
00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.815 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:14.816 Found net devices under 0000:af:00.0: cvl_0_0 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:14.816 Found net devices under 0000:af:00.1: cvl_0_1 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.816 16:03:13 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:14.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:25:14.816 00:25:14.816 --- 10.0.0.2 ping statistics --- 00:25:14.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.816 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:25:14.816 00:25:14.816 --- 10.0.0.1 ping statistics --- 00:25:14.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.816 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:14.816 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3870876 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3870876 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3870876 ']' 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:15.074 16:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:15.074 [2024-05-15 16:03:13.464315] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
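Everything up to this point is nvmf_tcp_init splitting the two E810 ports across a network namespace: the target side (cvl_0_0, 10.0.0.2) runs inside cvl_0_0_ns_spdk while the initiator keeps cvl_0_1 (10.0.0.1) in the default namespace, so the NVMe/TCP traffic crosses a real link. Condensed from the commands in the trace (run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why nvmf_tgt is launched wrapped in ip netns exec cvl_0_0_ns_spdk just above: the target process has to live in the same namespace as its NIC.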
00:25:15.074 [2024-05-15 16:03:13.464364] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.074 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.075 [2024-05-15 16:03:13.538826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:15.075 [2024-05-15 16:03:13.613098] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.075 [2024-05-15 16:03:13.613132] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.075 [2024-05-15 16:03:13.613142] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.075 [2024-05-15 16:03:13.613151] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.075 [2024-05-15 16:03:13.613158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.075 [2024-05-15 16:03:13.613212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.075 [2024-05-15 16:03:13.613215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3870876 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:16.007 [2024-05-15 16:03:14.461753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.007 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:16.265 Malloc0 00:25:16.265 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:16.522 16:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.522 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.780 [2024-05-15 16:03:15.155141] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:25:16.780 [2024-05-15 16:03:15.155389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.780 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.780 [2024-05-15 16:03:15.331783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3871303 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3871303 /var/tmp/bdevperf.sock 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3871303 ']' 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
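With both listeners up (ports 4420 and 4421) and bdevperf waiting on its private RPC socket, the test attaches the same subsystem twice -- the second time with -x multipath -- and then check_status repeatedly reads bdev_nvme_get_io_paths, selecting each path by trsvcid with jq. The core of one round, condensed from the trace ($rpc is shorthand for the in-tree rpc.py plus the bdevperf socket):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# same NQN over both ports; the 4421 controller joins Nvme0 as a multipath alternative
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# port_status 4420 current true  ==  this jq must print "true"
$rpc bdev_nvme_get_io_paths | jq -r \
    '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'

Each set_ANA_state/check_status cycle below flips the ANA state of one or both listeners with nvmf_subsystem_listener_set_ana_state (issued against the target's default RPC socket, not bdevperf's) and then asserts the expected current/connected/accessible triple for each port.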
00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:17.038 16:03:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.970 16:03:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:17.970 16:03:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:17.970 16:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:17.970 16:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:18.228 Nvme0n1 00:25:18.228 16:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:18.486 Nvme0n1 00:25:18.486 16:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:18.486 16:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:21.012 16:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:21.012 16:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:21.012 16:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:21.012 16:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:21.945 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:21.945 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:21.945 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.945 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.203 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.203 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:22.203 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.203 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.203 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.203 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.203 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.203 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.461 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.461 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.461 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.461 16:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.719 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.719 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:22.719 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.719 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.719 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.719 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:22.719 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.719 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.977 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.977 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:22.977 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:23.235 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:23.493 16:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:24.426 16:03:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:24.426 16:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:24.426 16:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.426 16:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:24.683 16:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.683 16:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:24.683 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.683 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.683 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.683 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.683 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.683 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.940 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.940 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.940 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.940 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.198 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.198 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:25.198 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.198 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.198 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.198 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:25.198 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.198 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:25.456 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.456 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:25.456 16:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.714 16:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:25.971 16:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:26.906 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:26.906 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:26.906 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.906 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:27.164 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.164 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:27.164 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.164 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.164 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.164 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.164 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.164 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.421 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.421 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.421 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.421 16:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.680 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.680 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.680 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.680 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.680 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.680 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.680 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.680 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.044 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.044 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:28.044 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:28.044 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:28.302 16:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:29.235 16:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:29.235 16:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:29.235 16:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.235 16:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.493 16:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.493 16:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:29.493 16:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.493 16:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.751 16:03:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.751 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.751 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.751 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.751 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.751 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.751 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.751 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.009 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.009 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.009 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.009 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.266 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.266 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:30.266 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.267 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.524 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.524 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:30.524 16:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:30.524 16:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:30.781 16:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:31.710 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:31.710 16:03:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:31.710 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.710 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.967 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.967 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:31.967 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:31.967 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.224 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.224 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.224 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.224 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.224 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.224 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.224 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.224 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.482 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.482 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:32.482 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.482 16:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.739 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.739 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:32.739 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.739 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:32.739 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.739 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:32.739 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:32.996 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.253 16:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:34.183 16:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:34.183 16:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:34.183 16:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.183 16:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.441 16:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.441 16:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:34.441 16:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.441 16:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.699 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.699 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:34.699 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.699 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:34.699 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.699 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:34.699 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.699 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:34.956 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.956 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:34.956 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:34.956 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.212 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.212 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.212 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.212 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.212 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.213 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:35.469 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:35.469 16:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:35.726 16:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:35.984 16:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:36.916 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:36.916 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:36.916 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.916 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.174 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.174 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:37.174 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.174 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:25:37.174 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.174 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.174 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.174 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.431 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.431 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.431 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.431 16:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:37.688 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.688 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:37.688 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:37.688 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.945 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.945 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:37.945 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.946 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:37.946 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.946 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:37.946 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:38.203 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:38.460 16:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:39.390 16:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:25:39.390 16:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:39.390 16:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.390 16:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.648 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.648 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:39.648 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.648 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.905 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.905 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.905 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.905 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.905 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.905 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.905 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.905 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.163 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.163 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.163 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.163 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.421 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.421 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:40.421 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.421 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.421 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.421 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:40.421 16:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:40.679 16:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:40.937 16:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:41.897 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:41.897 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:41.897 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.897 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.155 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.155 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:42.155 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.155 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.155 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.155 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.155 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.155 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.413 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.413 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.413 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.413 16:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.673 16:03:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.673 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:42.673 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.673 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.673 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.673 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:42.673 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.673 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:42.931 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.931 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:42.931 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:43.189 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:43.446 16:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:44.379 16:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:44.379 16:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:44.379 16:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.379 16:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.636 16:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.636 16:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:44.636 16:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.636 16:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.636 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.636 16:03:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.636 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.636 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:44.893 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.893 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:44.893 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.893 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.150 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.150 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.150 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.150 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.150 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.150 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:45.150 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.150 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3871303 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3871303 ']' 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3871303 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3871303 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3871303' 00:25:45.408 killing process with pid 3871303 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3871303 00:25:45.408 16:03:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3871303 00:25:45.670 Connection closed with partial response: 00:25:45.670 00:25:45.670 00:25:45.670 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3871303 00:25:45.670 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:45.670 [2024-05-15 16:03:15.395975] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:45.670 [2024-05-15 16:03:15.396032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3871303 ] 00:25:45.670 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.670 [2024-05-15 16:03:15.463658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.670 [2024-05-15 16:03:15.534186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.670 Running I/O for 90 seconds... 00:25:45.670 [2024-05-15 16:03:29.017619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.670 [2024-05-15 16:03:29.017660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.670 [2024-05-15 16:03:29.017710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.670 [2024-05-15 16:03:29.017737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.670 [2024-05-15 16:03:29.017761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.670 [2024-05-15 16:03:29.017784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.670 [2024-05-15 16:03:29.017808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017822] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.670 [2024-05-15 16:03:29.017832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.670 [2024-05-15 16:03:29.017855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.017879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.017902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.017931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.017955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.017979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.017994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.670 [2024-05-15 16:03:29.018483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:45.670 [2024-05-15 16:03:29.018499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.671 [2024-05-15 16:03:29.018509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.018524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.671 [2024-05-15 16:03:29.018533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.018552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:45.671 [2024-05-15 16:03:29.018562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.018578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.671 [2024-05-15 16:03:29.018587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.018602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.671 [2024-05-15 16:03:29.018612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.018627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.671 [2024-05-15 16:03:29.018636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.018652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.671 [2024-05-15 16:03:29.018661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:126 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 
16:03:29.019463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.671 [2024-05-15 16:03:29.019907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.671 [2024-05-15 16:03:29.019934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:45.671 [2024-05-15 16:03:29.019952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.019961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.019978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.019988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.672 [2024-05-15 16:03:29.020558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020922] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.020981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.020991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.021010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.021020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.021041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.021051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.021071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.021081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.021101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.021110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.021130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.021140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.021159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.672 [2024-05-15 16:03:29.021170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:45.672 [2024-05-15 16:03:29.021195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.673 [2024-05-15 16:03:29.021205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 
00:25:45.673 [2024-05-15 16:03:29 to 16:03:41] nvme_qpair.c: *NOTICE*: several dozen repeated command/completion pairs elided here: each nvme_io_qpair_print_command record (WRITE sqid:1 lba 108136-108176 and 51584-52416, READ sqid:1 lba 51408-51560, len:8) is answered by an spdk_nvme_print_completion record reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, consistent with the ANA state changes this multipath test exercises
00:25:45.675 Received shutdown signal, test time was about 26.863776 seconds
00:25:45.675
00:25:45.675 Latency(us)
00:25:45.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:45.675 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:45.675 Verification LBA range: start 0x0 length 0x4000
00:25:45.675 Nvme0n1 : 26.86 10973.44 42.87 0.00 0.00 11629.93 619.32 3019898.88
00:25:45.675 ===================================================================================================================
00:25:45.675 Total : 10973.44 42.87 0.00 0.00 11629.93 619.32 3019898.88
00:25:45.675 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:45.933 rmmod nvme_tcp
00:25:45.933 rmmod nvme_fabrics
00:25:45.933 rmmod nvme_keyring
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3870876 ']'
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3870876
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3870876 ']'
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3870876
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3870876
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3870876'
00:25:45.933 killing process with pid 3870876
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3870876
00:25:45.933 [2024-05-15 16:03:44.484332] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3870876
00:25:45.933 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:46.192 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:46.192 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:46.192 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:46.192 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:46.192 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:46.192 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:46.192 16:03:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:48.724 16:03:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:48.724
00:25:48.724 real 0m40.414s
00:25:48.724 user 1m42.310s
00:25:48.724 sys 0m14.562s
00:25:48.724 16:03:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:25:48.724 16:03:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:48.724 ************************************
00:25:48.724 END TEST nvmf_host_multipath_status
00:25:48.724 ************************************
00:25:48.724 16:03:46 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:48.724 16:03:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:25:48.724 16:03:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:25:48.724 16:03:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:48.724 ************************************
00:25:48.724 START TEST nvmf_discovery_remove_ifc
00:25:48.724 ************************************
00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:48.724 * Looking for test storage...
00:25:48.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.724 16:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:48.724 16:03:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.284 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:55.285 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:55.285 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.285 16:03:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:55.285 Found net devices under 0000:af:00.0: cvl_0_0 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:55.285 Found net devices under 0000:af:00.1: cvl_0_1 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:25:55.285 00:25:55.285 --- 10.0.0.2 ping statistics --- 00:25:55.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.285 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:25:55.285 00:25:55.285 --- 10.0.0.1 ping statistics --- 00:25:55.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.285 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3880077 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3880077 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3880077 ']' 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:55.285 16:03:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.285 [2024-05-15 16:03:53.653860] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
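[editor's note] nvmf_tcp_init has just built the test topology traced above: the first E810 port, cvl_0_0, is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the two pings then verify reachability in both directions. For reference, a minimal standalone sketch of the same setup, using the interface and namespace names enumerated in the trace (run as root):

    # build the target namespace and address both ends of the link
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port now lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic from the test link
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator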
00:25:55.285 [2024-05-15 16:03:53.653908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.285 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.285 [2024-05-15 16:03:53.727165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.285 [2024-05-15 16:03:53.802697] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.285 [2024-05-15 16:03:53.802734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.286 [2024-05-15 16:03:53.802744] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.286 [2024-05-15 16:03:53.802752] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.286 [2024-05-15 16:03:53.802775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.286 [2024-05-15 16:03:53.802801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.220 [2024-05-15 16:03:54.509224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.220 [2024-05-15 16:03:54.517209] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:56.220 [2024-05-15 16:03:54.517385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:56.220 null0 00:25:56.220 [2024-05-15 16:03:54.549382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3880117 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3880117 /tmp/host.sock 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3880117 ']' 00:25:56.220 16:03:54 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:56.220 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:56.220 16:03:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.220 [2024-05-15 16:03:54.617766] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:56.220 [2024-05-15 16:03:54.617817] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880117 ] 00:25:56.220 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.220 [2024-05-15 16:03:54.686409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.220 [2024-05-15 16:03:54.762217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.155 16:03:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.089 [2024-05-15 16:03:56.517962] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:58.089 [2024-05-15 16:03:56.517991] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:58.089 [2024-05-15 
16:03:56.518006] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.089 [2024-05-15 16:03:56.606261] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:58.347 [2024-05-15 16:03:56.751033] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:58.347 [2024-05-15 16:03:56.751080] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:58.347 [2024-05-15 16:03:56.751104] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:58.347 [2024-05-15 16:03:56.751122] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:58.347 [2024-05-15 16:03:56.751144] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.347 [2024-05-15 16:03:56.757892] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f9e860 was disconnected and freed. delete nvme_qpair. 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:58.347 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:58.604 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:58.604 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.604 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.604 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.604 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.604 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.605 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.605 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.605 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.605 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:58.605 16:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.567 16:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.567 16:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.567 16:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.567 16:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.567 16:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.567 16:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.567 16:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.567 16:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.567 16:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.567 16:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.508 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.508 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.508 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.508 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.508 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.508 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.508 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.508 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.765 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.765 16:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.698 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.698 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.698 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.699 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.699 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.699 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.699 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.699 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.699 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.699 16:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.631 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.631 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.631 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.631 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.631 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.631 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.631 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.631 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.889 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:02.889 16:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.822 [2024-05-15 16:04:02.191896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:03.822 [2024-05-15 16:04:02.191939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.822 [2024-05-15 16:04:02.191953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.822 [2024-05-15 16:04:02.191966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.822 [2024-05-15 16:04:02.191975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.822 [2024-05-15 16:04:02.191984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.822 [2024-05-15 16:04:02.191994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.822 [2024-05-15 16:04:02.192003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.822 [2024-05-15 16:04:02.192016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.822 [2024-05-15 16:04:02.192026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.822 [2024-05-15 16:04:02.192035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.822 [2024-05-15 16:04:02.192044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f65990 is same with the state(5) to be set 00:26:03.822 [2024-05-15 16:04:02.201917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f65990 (9): Bad file descriptor 
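[Annotation] The spdk_sock_recv() failure above carries errno 110, which on Linux is ETIMEDOUT: once the test deletes the address and downs cvl_0_0, the TCP connection times out, the qpair is torn down, and the pending admin commands (the four ASYNC EVENT REQUESTs plus the KEEP ALIVE) complete as ABORTED - SQ DELETION. As a side note, the errno can be decoded on the test box with:
grep -w 110 /usr/include/asm-generic/errno.h    # ETIMEDOUT  110  "Connection timed out"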
00:26:03.822 [2024-05-15 16:04:02.211957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.822 16:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.822 16:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.822 16:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.822 16:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.822 16:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.822 16:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.822 16:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.754 [2024-05-15 16:04:03.271271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:06.125 [2024-05-15 16:04:04.295208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:06.125 [2024-05-15 16:04:04.295249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f65990 with addr=10.0.0.2, port=4420 00:26:06.125 [2024-05-15 16:04:04.295267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f65990 is same with the state(5) to be set 00:26:06.125 [2024-05-15 16:04:04.295636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f65990 (9): Bad file descriptor 00:26:06.125 [2024-05-15 16:04:04.295665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.125 [2024-05-15 16:04:04.295690] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:06.125 [2024-05-15 16:04:04.295717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.125 [2024-05-15 16:04:04.295732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-05-15 16:04:04.295747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.125 [2024-05-15 16:04:04.295760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.125 [2024-05-15 16:04:04.295773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.126 [2024-05-15 16:04:04.295786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.126 [2024-05-15 16:04:04.295799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.126 [2024-05-15 16:04:04.295812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.126 [2024-05-15 16:04:04.295825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.126 [2024-05-15 16:04:04.295838] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.126 [2024-05-15 16:04:04.295855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:06.126 [2024-05-15 16:04:04.296295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f64e20 (9): Bad file descriptor 00:26:06.126 [2024-05-15 16:04:04.297311] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:06.126 [2024-05-15 16:04:04.297328] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:06.126 16:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.126 16:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:06.126 16:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.059 16:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:07.059 16:04:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.990 [2024-05-15 16:04:06.354183] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:07.990 [2024-05-15 16:04:06.354205] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:07.990 [2024-05-15 16:04:06.354219] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:07.990 [2024-05-15 16:04:06.482614] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:07.990 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.990 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.990 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.990 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.990 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.990 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.990 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.990 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.247 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:08.247 16:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:08.247 [2024-05-15 16:04:06.665706] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:08.247 [2024-05-15 16:04:06.665740] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:08.247 [2024-05-15 16:04:06.665759] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:08.247 [2024-05-15 16:04:06.665774] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:08.247 [2024-05-15 16:04:06.665783] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:08.247 [2024-05-15 16:04:06.673079] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1fa8e40 was disconnected and freed. delete nvme_qpair. 
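[Annotation] A condensed sketch of the get_bdev_list/wait_for_bdev polling pattern that the xtrace lines in this test repeat once per second; the log's rpc_cmd is autotest_common.sh's retry wrapper, assumed here to resolve to scripts/rpc.py:
get_bdev_list() {
    # Ask the host app on /tmp/host.sock for its bdevs; normalize to one sorted line.
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    # Poll until the bdev list matches the expectation, e.g.
    #   wait_for_bdev nvme0n1   (bdev attached by discovery)
    #   wait_for_bdev ''        (bdev gone after the interface was removed)
    #   wait_for_bdev nvme1n1   (bdev re-attached after the interface came back)
    local bdev_list=$1
    while [[ $(get_bdev_list) != "$bdev_list" ]]; do
        sleep 1
    done
}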
00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3880117 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3880117 ']' 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3880117 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3880117 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3880117' 00:26:09.180 killing process with pid 3880117 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3880117 00:26:09.180 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3880117 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:09.438 rmmod nvme_tcp 00:26:09.438 rmmod nvme_fabrics 00:26:09.438 rmmod nvme_keyring 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
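[Annotation] A condensed sketch of the killprocess helper that the autotest_common.sh@946-970 lines above step through; the sudo special-case visible in the xtrace is elided here, and the pid is assumed to be a child of the test shell so wait can reap it:
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 1             # is it still running?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")     # reactor_0/reactor_1 for SPDK apps
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                         # reap it so sockets and hugepages are released
}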
00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3880077 ']' 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3880077 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3880077 ']' 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3880077 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3880077 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3880077' 00:26:09.438 killing process with pid 3880077 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3880077 00:26:09.438 [2024-05-15 16:04:07.998564] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:09.438 16:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3880077 00:26:09.696 16:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:09.696 16:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:09.696 16:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:09.696 16:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.696 16:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:09.696 16:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.696 16:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.696 16:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.225 16:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:12.225 00:26:12.225 real 0m23.399s 00:26:12.225 user 0m27.293s 00:26:12.225 sys 0m7.165s 00:26:12.225 16:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:12.225 16:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.225 ************************************ 00:26:12.225 END TEST nvmf_discovery_remove_ifc 00:26:12.225 ************************************ 00:26:12.225 16:04:10 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:12.225 16:04:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:12.225 16:04:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:12.225 16:04:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
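[Annotation] The nvmf_identify_kernel_target run that starts below brings up the Linux kernel nvmet target through configfs and then identifies it from the initiator side. A sketch of that configuration sequence, reconstructed from the mkdir/echo/ln -s steps visible further down in this log; the configfs attribute names are the standard nvmet ABI and are an assumption here, since the xtrace only shows the values being echoed:
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$port"
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"   # assumed target of the echoed model string
echo 1 > "$subsys/attr_allow_any_host"                           # assumed target of the first 'echo 1'
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"           # the block device the setup picked
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                              # publish the subsystem on the port
After this, nvme discover -t tcp -a 10.0.0.1 -s 4420 should report two discovery log entries (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn), which is what the nvme discover output below shows.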
00:26:12.225 ************************************ 00:26:12.225 START TEST nvmf_identify_kernel_target 00:26:12.225 ************************************ 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:12.225 * Looking for test storage... 00:26:12.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:12.225 16:04:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:12.225 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:12.226 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:12.226 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.226 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.226 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.226 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:12.226 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:12.226 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:12.226 16:04:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:18.814 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:18.814 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:18.814 Found net devices under 0000:af:00.0: cvl_0_0 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:18.814 Found net devices under 0000:af:00.1: cvl_0_1 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.814 16:04:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.814 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:18.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:26:18.815 00:26:18.815 --- 10.0.0.2 ping statistics --- 00:26:18.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.815 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:26:18.815 00:26:18.815 --- 10.0.0.1 ping statistics --- 00:26:18.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.815 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:18.815 16:04:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:18.815 16:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:22.101 Waiting for block devices as requested 00:26:22.101 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:22.101 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:22.101 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:22.360 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:22.360 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:22.360 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:22.618 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:22.619 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:22.619 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:22.619 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:22.877 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:22.877 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:22.877 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:23.136 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:23.136 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:23.136 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:23.394 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:23.394 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:23.394 No valid GPT data, bailing 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:23.654 16:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:23.654 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:23.654 00:26:23.654 Discovery Log Number of Records 2, Generation counter 2 00:26:23.654 =====Discovery Log Entry 0====== 00:26:23.654 trtype: tcp 00:26:23.654 adrfam: ipv4 00:26:23.654 subtype: current discovery subsystem 00:26:23.654 treq: not specified, sq flow control disable supported 00:26:23.654 portid: 1 00:26:23.654 trsvcid: 4420 00:26:23.654 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:23.654 traddr: 10.0.0.1 00:26:23.654 eflags: none 00:26:23.654 sectype: none 00:26:23.654 =====Discovery Log Entry 1====== 00:26:23.654 trtype: tcp 00:26:23.654 adrfam: ipv4 00:26:23.654 subtype: nvme subsystem 00:26:23.654 treq: not specified, sq flow control disable supported 00:26:23.654 portid: 1 00:26:23.654 trsvcid: 4420 00:26:23.654 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:23.654 traddr: 10.0.0.1 00:26:23.654 eflags: none 00:26:23.654 sectype: none 00:26:23.654 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:23.654 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:23.654 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.654 ===================================================== 00:26:23.654 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:23.654 ===================================================== 00:26:23.654 Controller Capabilities/Features 00:26:23.654 ================================ 00:26:23.654 Vendor ID: 0000 00:26:23.654 Subsystem Vendor ID: 0000 00:26:23.654 Serial Number: 3052a97071e99d1c477d 00:26:23.654 Model Number: Linux 00:26:23.654 Firmware Version: 6.7.0-68 00:26:23.654 Recommended Arb Burst: 0 00:26:23.654 IEEE OUI Identifier: 00 00 00 00:26:23.654 Multi-path I/O 00:26:23.654 May have multiple subsystem ports: No 00:26:23.654 May have multiple 
controllers: No 00:26:23.654 Associated with SR-IOV VF: No 00:26:23.654 Max Data Transfer Size: Unlimited 00:26:23.654 Max Number of Namespaces: 0 00:26:23.654 Max Number of I/O Queues: 1024 00:26:23.654 NVMe Specification Version (VS): 1.3 00:26:23.655 NVMe Specification Version (Identify): 1.3 00:26:23.655 Maximum Queue Entries: 1024 00:26:23.655 Contiguous Queues Required: No 00:26:23.655 Arbitration Mechanisms Supported 00:26:23.655 Weighted Round Robin: Not Supported 00:26:23.655 Vendor Specific: Not Supported 00:26:23.655 Reset Timeout: 7500 ms 00:26:23.655 Doorbell Stride: 4 bytes 00:26:23.655 NVM Subsystem Reset: Not Supported 00:26:23.655 Command Sets Supported 00:26:23.655 NVM Command Set: Supported 00:26:23.655 Boot Partition: Not Supported 00:26:23.655 Memory Page Size Minimum: 4096 bytes 00:26:23.655 Memory Page Size Maximum: 4096 bytes 00:26:23.655 Persistent Memory Region: Not Supported 00:26:23.655 Optional Asynchronous Events Supported 00:26:23.655 Namespace Attribute Notices: Not Supported 00:26:23.655 Firmware Activation Notices: Not Supported 00:26:23.655 ANA Change Notices: Not Supported 00:26:23.655 PLE Aggregate Log Change Notices: Not Supported 00:26:23.655 LBA Status Info Alert Notices: Not Supported 00:26:23.655 EGE Aggregate Log Change Notices: Not Supported 00:26:23.655 Normal NVM Subsystem Shutdown event: Not Supported 00:26:23.655 Zone Descriptor Change Notices: Not Supported 00:26:23.655 Discovery Log Change Notices: Supported 00:26:23.655 Controller Attributes 00:26:23.655 128-bit Host Identifier: Not Supported 00:26:23.655 Non-Operational Permissive Mode: Not Supported 00:26:23.655 NVM Sets: Not Supported 00:26:23.655 Read Recovery Levels: Not Supported 00:26:23.655 Endurance Groups: Not Supported 00:26:23.655 Predictable Latency Mode: Not Supported 00:26:23.655 Traffic Based Keep ALive: Not Supported 00:26:23.655 Namespace Granularity: Not Supported 00:26:23.655 SQ Associations: Not Supported 00:26:23.655 UUID List: Not Supported 00:26:23.655 Multi-Domain Subsystem: Not Supported 00:26:23.655 Fixed Capacity Management: Not Supported 00:26:23.655 Variable Capacity Management: Not Supported 00:26:23.655 Delete Endurance Group: Not Supported 00:26:23.655 Delete NVM Set: Not Supported 00:26:23.655 Extended LBA Formats Supported: Not Supported 00:26:23.655 Flexible Data Placement Supported: Not Supported 00:26:23.655 00:26:23.655 Controller Memory Buffer Support 00:26:23.655 ================================ 00:26:23.655 Supported: No 00:26:23.655 00:26:23.655 Persistent Memory Region Support 00:26:23.655 ================================ 00:26:23.655 Supported: No 00:26:23.655 00:26:23.655 Admin Command Set Attributes 00:26:23.655 ============================ 00:26:23.655 Security Send/Receive: Not Supported 00:26:23.655 Format NVM: Not Supported 00:26:23.655 Firmware Activate/Download: Not Supported 00:26:23.655 Namespace Management: Not Supported 00:26:23.655 Device Self-Test: Not Supported 00:26:23.655 Directives: Not Supported 00:26:23.655 NVMe-MI: Not Supported 00:26:23.655 Virtualization Management: Not Supported 00:26:23.655 Doorbell Buffer Config: Not Supported 00:26:23.655 Get LBA Status Capability: Not Supported 00:26:23.655 Command & Feature Lockdown Capability: Not Supported 00:26:23.655 Abort Command Limit: 1 00:26:23.655 Async Event Request Limit: 1 00:26:23.655 Number of Firmware Slots: N/A 00:26:23.655 Firmware Slot 1 Read-Only: N/A 00:26:23.655 Firmware Activation Without Reset: N/A 00:26:23.655 Multiple Update Detection Support: N/A 
00:26:23.655 Firmware Update Granularity: No Information Provided 00:26:23.655 Per-Namespace SMART Log: No 00:26:23.655 Asymmetric Namespace Access Log Page: Not Supported 00:26:23.655 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:23.655 Command Effects Log Page: Not Supported 00:26:23.655 Get Log Page Extended Data: Supported 00:26:23.655 Telemetry Log Pages: Not Supported 00:26:23.655 Persistent Event Log Pages: Not Supported 00:26:23.655 Supported Log Pages Log Page: May Support 00:26:23.655 Commands Supported & Effects Log Page: Not Supported 00:26:23.655 Feature Identifiers & Effects Log Page:May Support 00:26:23.655 NVMe-MI Commands & Effects Log Page: May Support 00:26:23.655 Data Area 4 for Telemetry Log: Not Supported 00:26:23.655 Error Log Page Entries Supported: 1 00:26:23.655 Keep Alive: Not Supported 00:26:23.655 00:26:23.655 NVM Command Set Attributes 00:26:23.655 ========================== 00:26:23.655 Submission Queue Entry Size 00:26:23.655 Max: 1 00:26:23.655 Min: 1 00:26:23.655 Completion Queue Entry Size 00:26:23.655 Max: 1 00:26:23.655 Min: 1 00:26:23.655 Number of Namespaces: 0 00:26:23.655 Compare Command: Not Supported 00:26:23.655 Write Uncorrectable Command: Not Supported 00:26:23.655 Dataset Management Command: Not Supported 00:26:23.655 Write Zeroes Command: Not Supported 00:26:23.655 Set Features Save Field: Not Supported 00:26:23.655 Reservations: Not Supported 00:26:23.655 Timestamp: Not Supported 00:26:23.655 Copy: Not Supported 00:26:23.655 Volatile Write Cache: Not Present 00:26:23.655 Atomic Write Unit (Normal): 1 00:26:23.655 Atomic Write Unit (PFail): 1 00:26:23.655 Atomic Compare & Write Unit: 1 00:26:23.655 Fused Compare & Write: Not Supported 00:26:23.655 Scatter-Gather List 00:26:23.655 SGL Command Set: Supported 00:26:23.655 SGL Keyed: Not Supported 00:26:23.655 SGL Bit Bucket Descriptor: Not Supported 00:26:23.655 SGL Metadata Pointer: Not Supported 00:26:23.655 Oversized SGL: Not Supported 00:26:23.655 SGL Metadata Address: Not Supported 00:26:23.655 SGL Offset: Supported 00:26:23.655 Transport SGL Data Block: Not Supported 00:26:23.655 Replay Protected Memory Block: Not Supported 00:26:23.655 00:26:23.655 Firmware Slot Information 00:26:23.655 ========================= 00:26:23.655 Active slot: 0 00:26:23.655 00:26:23.655 00:26:23.655 Error Log 00:26:23.655 ========= 00:26:23.655 00:26:23.655 Active Namespaces 00:26:23.655 ================= 00:26:23.655 Discovery Log Page 00:26:23.655 ================== 00:26:23.655 Generation Counter: 2 00:26:23.655 Number of Records: 2 00:26:23.655 Record Format: 0 00:26:23.655 00:26:23.655 Discovery Log Entry 0 00:26:23.655 ---------------------- 00:26:23.655 Transport Type: 3 (TCP) 00:26:23.655 Address Family: 1 (IPv4) 00:26:23.655 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:23.655 Entry Flags: 00:26:23.655 Duplicate Returned Information: 0 00:26:23.655 Explicit Persistent Connection Support for Discovery: 0 00:26:23.655 Transport Requirements: 00:26:23.655 Secure Channel: Not Specified 00:26:23.655 Port ID: 1 (0x0001) 00:26:23.655 Controller ID: 65535 (0xffff) 00:26:23.656 Admin Max SQ Size: 32 00:26:23.656 Transport Service Identifier: 4420 00:26:23.656 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:23.656 Transport Address: 10.0.0.1 00:26:23.656 Discovery Log Entry 1 00:26:23.656 ---------------------- 00:26:23.656 Transport Type: 3 (TCP) 00:26:23.656 Address Family: 1 (IPv4) 00:26:23.656 Subsystem Type: 2 (NVM Subsystem) 00:26:23.656 Entry Flags: 
00:26:23.656 Duplicate Returned Information: 0 00:26:23.656 Explicit Persistent Connection Support for Discovery: 0 00:26:23.656 Transport Requirements: 00:26:23.656 Secure Channel: Not Specified 00:26:23.656 Port ID: 1 (0x0001) 00:26:23.656 Controller ID: 65535 (0xffff) 00:26:23.656 Admin Max SQ Size: 32 00:26:23.656 Transport Service Identifier: 4420 00:26:23.656 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:23.656 Transport Address: 10.0.0.1 00:26:23.656 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:23.656 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.915 get_feature(0x01) failed 00:26:23.915 get_feature(0x02) failed 00:26:23.915 get_feature(0x04) failed 00:26:23.915 ===================================================== 00:26:23.915 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:23.915 ===================================================== 00:26:23.915 Controller Capabilities/Features 00:26:23.915 ================================ 00:26:23.915 Vendor ID: 0000 00:26:23.915 Subsystem Vendor ID: 0000 00:26:23.915 Serial Number: 913ffbc52365465ed45b 00:26:23.915 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:23.915 Firmware Version: 6.7.0-68 00:26:23.915 Recommended Arb Burst: 6 00:26:23.915 IEEE OUI Identifier: 00 00 00 00:26:23.915 Multi-path I/O 00:26:23.915 May have multiple subsystem ports: Yes 00:26:23.915 May have multiple controllers: Yes 00:26:23.915 Associated with SR-IOV VF: No 00:26:23.915 Max Data Transfer Size: Unlimited 00:26:23.915 Max Number of Namespaces: 1024 00:26:23.915 Max Number of I/O Queues: 128 00:26:23.915 NVMe Specification Version (VS): 1.3 00:26:23.915 NVMe Specification Version (Identify): 1.3 00:26:23.915 Maximum Queue Entries: 1024 00:26:23.915 Contiguous Queues Required: No 00:26:23.915 Arbitration Mechanisms Supported 00:26:23.915 Weighted Round Robin: Not Supported 00:26:23.915 Vendor Specific: Not Supported 00:26:23.915 Reset Timeout: 7500 ms 00:26:23.915 Doorbell Stride: 4 bytes 00:26:23.915 NVM Subsystem Reset: Not Supported 00:26:23.915 Command Sets Supported 00:26:23.915 NVM Command Set: Supported 00:26:23.915 Boot Partition: Not Supported 00:26:23.915 Memory Page Size Minimum: 4096 bytes 00:26:23.915 Memory Page Size Maximum: 4096 bytes 00:26:23.915 Persistent Memory Region: Not Supported 00:26:23.915 Optional Asynchronous Events Supported 00:26:23.915 Namespace Attribute Notices: Supported 00:26:23.915 Firmware Activation Notices: Not Supported 00:26:23.915 ANA Change Notices: Supported 00:26:23.915 PLE Aggregate Log Change Notices: Not Supported 00:26:23.915 LBA Status Info Alert Notices: Not Supported 00:26:23.915 EGE Aggregate Log Change Notices: Not Supported 00:26:23.915 Normal NVM Subsystem Shutdown event: Not Supported 00:26:23.915 Zone Descriptor Change Notices: Not Supported 00:26:23.915 Discovery Log Change Notices: Not Supported 00:26:23.915 Controller Attributes 00:26:23.915 128-bit Host Identifier: Supported 00:26:23.915 Non-Operational Permissive Mode: Not Supported 00:26:23.915 NVM Sets: Not Supported 00:26:23.915 Read Recovery Levels: Not Supported 00:26:23.915 Endurance Groups: Not Supported 00:26:23.915 Predictable Latency Mode: Not Supported 00:26:23.915 Traffic Based Keep ALive: Supported 00:26:23.915 Namespace Granularity: Not Supported 
00:26:23.915 SQ Associations: Not Supported 00:26:23.915 UUID List: Not Supported 00:26:23.915 Multi-Domain Subsystem: Not Supported 00:26:23.915 Fixed Capacity Management: Not Supported 00:26:23.916 Variable Capacity Management: Not Supported 00:26:23.916 Delete Endurance Group: Not Supported 00:26:23.916 Delete NVM Set: Not Supported 00:26:23.916 Extended LBA Formats Supported: Not Supported 00:26:23.916 Flexible Data Placement Supported: Not Supported 00:26:23.916 00:26:23.916 Controller Memory Buffer Support 00:26:23.916 ================================ 00:26:23.916 Supported: No 00:26:23.916 00:26:23.916 Persistent Memory Region Support 00:26:23.916 ================================ 00:26:23.916 Supported: No 00:26:23.916 00:26:23.916 Admin Command Set Attributes 00:26:23.916 ============================ 00:26:23.916 Security Send/Receive: Not Supported 00:26:23.916 Format NVM: Not Supported 00:26:23.916 Firmware Activate/Download: Not Supported 00:26:23.916 Namespace Management: Not Supported 00:26:23.916 Device Self-Test: Not Supported 00:26:23.916 Directives: Not Supported 00:26:23.916 NVMe-MI: Not Supported 00:26:23.916 Virtualization Management: Not Supported 00:26:23.916 Doorbell Buffer Config: Not Supported 00:26:23.916 Get LBA Status Capability: Not Supported 00:26:23.916 Command & Feature Lockdown Capability: Not Supported 00:26:23.916 Abort Command Limit: 4 00:26:23.916 Async Event Request Limit: 4 00:26:23.916 Number of Firmware Slots: N/A 00:26:23.916 Firmware Slot 1 Read-Only: N/A 00:26:23.916 Firmware Activation Without Reset: N/A 00:26:23.916 Multiple Update Detection Support: N/A 00:26:23.916 Firmware Update Granularity: No Information Provided 00:26:23.916 Per-Namespace SMART Log: Yes 00:26:23.916 Asymmetric Namespace Access Log Page: Supported 00:26:23.916 ANA Transition Time : 10 sec 00:26:23.916 00:26:23.916 Asymmetric Namespace Access Capabilities 00:26:23.916 ANA Optimized State : Supported 00:26:23.916 ANA Non-Optimized State : Supported 00:26:23.916 ANA Inaccessible State : Supported 00:26:23.916 ANA Persistent Loss State : Supported 00:26:23.916 ANA Change State : Supported 00:26:23.916 ANAGRPID is not changed : No 00:26:23.916 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:23.916 00:26:23.916 ANA Group Identifier Maximum : 128 00:26:23.916 Number of ANA Group Identifiers : 128 00:26:23.916 Max Number of Allowed Namespaces : 1024 00:26:23.916 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:23.916 Command Effects Log Page: Supported 00:26:23.916 Get Log Page Extended Data: Supported 00:26:23.916 Telemetry Log Pages: Not Supported 00:26:23.916 Persistent Event Log Pages: Not Supported 00:26:23.916 Supported Log Pages Log Page: May Support 00:26:23.916 Commands Supported & Effects Log Page: Not Supported 00:26:23.916 Feature Identifiers & Effects Log Page:May Support 00:26:23.916 NVMe-MI Commands & Effects Log Page: May Support 00:26:23.916 Data Area 4 for Telemetry Log: Not Supported 00:26:23.916 Error Log Page Entries Supported: 128 00:26:23.916 Keep Alive: Supported 00:26:23.916 Keep Alive Granularity: 1000 ms 00:26:23.916 00:26:23.916 NVM Command Set Attributes 00:26:23.916 ========================== 00:26:23.916 Submission Queue Entry Size 00:26:23.916 Max: 64 00:26:23.916 Min: 64 00:26:23.916 Completion Queue Entry Size 00:26:23.916 Max: 16 00:26:23.916 Min: 16 00:26:23.916 Number of Namespaces: 1024 00:26:23.916 Compare Command: Not Supported 00:26:23.916 Write Uncorrectable Command: Not Supported 00:26:23.916 Dataset Management Command: Supported 
00:26:23.916 Write Zeroes Command: Supported 00:26:23.916 Set Features Save Field: Not Supported 00:26:23.916 Reservations: Not Supported 00:26:23.916 Timestamp: Not Supported 00:26:23.916 Copy: Not Supported 00:26:23.916 Volatile Write Cache: Present 00:26:23.916 Atomic Write Unit (Normal): 1 00:26:23.916 Atomic Write Unit (PFail): 1 00:26:23.916 Atomic Compare & Write Unit: 1 00:26:23.916 Fused Compare & Write: Not Supported 00:26:23.916 Scatter-Gather List 00:26:23.916 SGL Command Set: Supported 00:26:23.916 SGL Keyed: Not Supported 00:26:23.916 SGL Bit Bucket Descriptor: Not Supported 00:26:23.916 SGL Metadata Pointer: Not Supported 00:26:23.916 Oversized SGL: Not Supported 00:26:23.916 SGL Metadata Address: Not Supported 00:26:23.916 SGL Offset: Supported 00:26:23.916 Transport SGL Data Block: Not Supported 00:26:23.916 Replay Protected Memory Block: Not Supported 00:26:23.916 00:26:23.916 Firmware Slot Information 00:26:23.916 ========================= 00:26:23.916 Active slot: 0 00:26:23.916 00:26:23.916 Asymmetric Namespace Access 00:26:23.916 =========================== 00:26:23.916 Change Count : 0 00:26:23.916 Number of ANA Group Descriptors : 1 00:26:23.916 ANA Group Descriptor : 0 00:26:23.916 ANA Group ID : 1 00:26:23.916 Number of NSID Values : 1 00:26:23.916 Change Count : 0 00:26:23.916 ANA State : 1 00:26:23.916 Namespace Identifier : 1 00:26:23.916 00:26:23.916 Commands Supported and Effects 00:26:23.916 ============================== 00:26:23.916 Admin Commands 00:26:23.916 -------------- 00:26:23.916 Get Log Page (02h): Supported 00:26:23.916 Identify (06h): Supported 00:26:23.916 Abort (08h): Supported 00:26:23.916 Set Features (09h): Supported 00:26:23.916 Get Features (0Ah): Supported 00:26:23.916 Asynchronous Event Request (0Ch): Supported 00:26:23.916 Keep Alive (18h): Supported 00:26:23.916 I/O Commands 00:26:23.916 ------------ 00:26:23.916 Flush (00h): Supported 00:26:23.916 Write (01h): Supported LBA-Change 00:26:23.916 Read (02h): Supported 00:26:23.916 Write Zeroes (08h): Supported LBA-Change 00:26:23.916 Dataset Management (09h): Supported 00:26:23.916 00:26:23.916 Error Log 00:26:23.916 ========= 00:26:23.916 Entry: 0 00:26:23.916 Error Count: 0x3 00:26:23.916 Submission Queue Id: 0x0 00:26:23.916 Command Id: 0x5 00:26:23.916 Phase Bit: 0 00:26:23.916 Status Code: 0x2 00:26:23.916 Status Code Type: 0x0 00:26:23.916 Do Not Retry: 1 00:26:23.916 Error Location: 0x28 00:26:23.916 LBA: 0x0 00:26:23.916 Namespace: 0x0 00:26:23.916 Vendor Log Page: 0x0 00:26:23.916 ----------- 00:26:23.916 Entry: 1 00:26:23.916 Error Count: 0x2 00:26:23.916 Submission Queue Id: 0x0 00:26:23.916 Command Id: 0x5 00:26:23.916 Phase Bit: 0 00:26:23.916 Status Code: 0x2 00:26:23.916 Status Code Type: 0x0 00:26:23.916 Do Not Retry: 1 00:26:23.916 Error Location: 0x28 00:26:23.916 LBA: 0x0 00:26:23.916 Namespace: 0x0 00:26:23.916 Vendor Log Page: 0x0 00:26:23.916 ----------- 00:26:23.916 Entry: 2 00:26:23.916 Error Count: 0x1 00:26:23.916 Submission Queue Id: 0x0 00:26:23.916 Command Id: 0x4 00:26:23.916 Phase Bit: 0 00:26:23.916 Status Code: 0x2 00:26:23.916 Status Code Type: 0x0 00:26:23.916 Do Not Retry: 1 00:26:23.916 Error Location: 0x28 00:26:23.916 LBA: 0x0 00:26:23.916 Namespace: 0x0 00:26:23.916 Vendor Log Page: 0x0 00:26:23.916 00:26:23.916 Number of Queues 00:26:23.916 ================ 00:26:23.916 Number of I/O Submission Queues: 128 00:26:23.916 Number of I/O Completion Queues: 128 00:26:23.916 00:26:23.916 ZNS Specific Controller Data 00:26:23.916 
============================ 00:26:23.916 Zone Append Size Limit: 0 00:26:23.916 00:26:23.916 00:26:23.916 Active Namespaces 00:26:23.916 ================= 00:26:23.916 get_feature(0x05) failed 00:26:23.916 Namespace ID:1 00:26:23.916 Command Set Identifier: NVM (00h) 00:26:23.916 Deallocate: Supported 00:26:23.916 Deallocated/Unwritten Error: Not Supported 00:26:23.916 Deallocated Read Value: Unknown 00:26:23.916 Deallocate in Write Zeroes: Not Supported 00:26:23.916 Deallocated Guard Field: 0xFFFF 00:26:23.916 Flush: Supported 00:26:23.916 Reservation: Not Supported 00:26:23.916 Namespace Sharing Capabilities: Multiple Controllers 00:26:23.916 Size (in LBAs): 3125627568 (1490GiB) 00:26:23.916 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:23.916 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:23.916 UUID: 31f35537-1b0f-4be5-a49e-6626e865980d 00:26:23.916 Thin Provisioning: Not Supported 00:26:23.916 Per-NS Atomic Units: Yes 00:26:23.916 Atomic Boundary Size (Normal): 0 00:26:23.916 Atomic Boundary Size (PFail): 0 00:26:23.916 Atomic Boundary Offset: 0 00:26:23.916 NGUID/EUI64 Never Reused: No 00:26:23.916 ANA group ID: 1 00:26:23.916 Namespace Write Protected: No 00:26:23.916 Number of LBA Formats: 1 00:26:23.916 Current LBA Format: LBA Format #00 00:26:23.916 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:23.916 00:26:23.916 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:23.916 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.916 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:23.916 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.916 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:23.916 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.916 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.916 rmmod nvme_tcp 00:26:23.917 rmmod nvme_fabrics 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.917 16:04:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
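The identify-kernel-target run above drives the Linux nvmet target entirely through configfs: the mkdir/echo/ln -s calls traced at the start of the test create the subsystem, its namespace, and a TCP port, and clean_kernel_target below unwinds them in reverse before unloading nvmet_tcp/nvmet. A minimal sketch of that configfs sequence, using the kernel nvmet attribute names; the backing device and listen address mirror this run and are otherwise assumptions:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"          # accept any host NQN
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"     # block device backing namespace 1
echo 1            > "$sub/namespaces/1/enable"
echo tcp          > "$port/addr_trtype"                 # transport, address family, address, service id
echo ipv4         > "$port/addr_adrfam"
echo 10.0.0.1     > "$port/addr_traddr"
echo 4420         > "$port/addr_trsvcid"
ln -s "$sub" "$port/subsystems/"                        # expose the subsystem on the port
# teardown (what clean_kernel_target does below): remove the symlink, disable and rmdir the
# namespace, rmdir the port and subsystem, then modprobe -r nvmet_tcp nvmet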
00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:25.829 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:26.087 16:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:29.367 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:29.367 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:30.744 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:26:30.744 00:26:30.744 real 0m18.643s 00:26:30.744 user 0m4.255s 00:26:30.744 sys 0m9.952s 00:26:30.744 16:04:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:30.744 16:04:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.744 ************************************ 00:26:30.744 END TEST nvmf_identify_kernel_target 00:26:30.744 ************************************ 00:26:30.744 16:04:29 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:30.744 16:04:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:30.744 16:04:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:30.744 16:04:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.744 ************************************ 00:26:30.744 START TEST nvmf_auth_host 00:26:30.744 ************************************ 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh 
--transport=tcp 00:26:30.744 * Looking for test storage... 00:26:30.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:30.744 16:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.307 
16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:37.307 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:37.307 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:37.307 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:37.308 Found net devices under 0000:af:00.0: 
cvl_0_0 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:37.308 Found net devices under 0000:af:00.1: cvl_0_1 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:37.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:26:37.308 00:26:37.308 --- 10.0.0.2 ping statistics --- 00:26:37.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.308 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:37.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:26:37.308 00:26:37.308 --- 10.0.0.1 ping statistics --- 00:26:37.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.308 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3892882 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3892882 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3892882 ']' 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
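After wiring cvl_0_0 into the cvl_0_0_ns_spdk namespace and verifying 10.0.0.1/10.0.0.2 reachability, nvmfappstart launches nvmf_tgt inside that namespace and waitforlisten blocks until the RPC socket answers, as traced just above. A hedged sketch of that start-and-wait pattern (the real helpers live in nvmf/common.sh and autotest_common.sh; the loop bound and the rpc.py probe are illustrative, not the actual implementation):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1                 # target died before it started listening
    ./scripts/rpc.py rpc_get_methods &> /dev/null && break    # RPC socket /var/tmp/spdk.sock is up
    sleep 0.1
done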
00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:37.308 16:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=904b6d4316a980aaac11ce266ab4f965 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zf6 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 904b6d4316a980aaac11ce266ab4f965 0 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 904b6d4316a980aaac11ce266ab4f965 0 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=904b6d4316a980aaac11ce266ab4f965 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zf6 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zf6 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zf6 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:38.244 
16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=28bc7a862769bc8e8c1edb9338e6667c5294459e729d6b012af7b8aec64a3430 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.00G 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 28bc7a862769bc8e8c1edb9338e6667c5294459e729d6b012af7b8aec64a3430 3 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 28bc7a862769bc8e8c1edb9338e6667c5294459e729d6b012af7b8aec64a3430 3 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=28bc7a862769bc8e8c1edb9338e6667c5294459e729d6b012af7b8aec64a3430 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:38.244 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.00G 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.00G 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.00G 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:38.502 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e706141a19f3647afda4f7919916e6d3386d668694caa5f2 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jSl 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e706141a19f3647afda4f7919916e6d3386d668694caa5f2 0 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e706141a19f3647afda4f7919916e6d3386d668694caa5f2 0 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e706141a19f3647afda4f7919916e6d3386d668694caa5f2 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jSl 00:26:38.503 16:04:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jSl 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jSl 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=54b906860edf504d7801747303ffd053c4a8bcd125adfc5f 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rEs 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 54b906860edf504d7801747303ffd053c4a8bcd125adfc5f 2 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 54b906860edf504d7801747303ffd053c4a8bcd125adfc5f 2 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=54b906860edf504d7801747303ffd053c4a8bcd125adfc5f 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rEs 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rEs 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.rEs 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1ff29a6bcffe50573ade810c21071dde 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pJn 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1ff29a6bcffe50573ade810c21071dde 1 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1ff29a6bcffe50573ade810c21071dde 1 
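Each gen_dhchap_key call traced here reads random bytes with xxd and pipes them through format_dhchap_key/format_key, which uses an inline python snippet to emit a DH-HMAC-CHAP secret. A hedged sketch of what that formatting amounts to, assuming the usual DHHC-1 representation of base64(secret bytes followed by their little-endian CRC-32); note the hex string from xxd is used verbatim as the secret, not decoded back to raw bytes:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, e.g. 904b6d4316a980aaac11ce266ab4f965
digest=0                               # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512
python3 - "$key" "$digest" <<'EOF'
import sys, base64, zlib
key = sys.argv[1].encode()                        # the ASCII hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")       # 4-byte CRC-32 trailer, little-endian
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF

The resulting file (e.g. /tmp/spdk.key-null.zf6) is chmod 0600'd and later registered with the target's keyring for the auth tests.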
00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1ff29a6bcffe50573ade810c21071dde 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:38.503 16:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pJn 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pJn 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pJn 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=617c31076e25eda29851638b1ccc80b6 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gUy 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 617c31076e25eda29851638b1ccc80b6 1 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 617c31076e25eda29851638b1ccc80b6 1 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=617c31076e25eda29851638b1ccc80b6 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:38.503 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gUy 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gUy 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gUy 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=c4b5fab80347843941ca2986f70d91837af71c56c8f4bc79 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.r2c 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4b5fab80347843941ca2986f70d91837af71c56c8f4bc79 2 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4b5fab80347843941ca2986f70d91837af71c56c8f4bc79 2 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4b5fab80347843941ca2986f70d91837af71c56c8f4bc79 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.r2c 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.r2c 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.r2c 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=db5f9fbed0d3b084e071aa9b88cdae6c 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eni 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key db5f9fbed0d3b084e071aa9b88cdae6c 0 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 db5f9fbed0d3b084e071aa9b88cdae6c 0 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=db5f9fbed0d3b084e071aa9b88cdae6c 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eni 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eni 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.eni 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=108223566ff7e7ebc95d4b51467a8c207740f52d416e7b52f89038d2a1ed216b 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6RN 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 108223566ff7e7ebc95d4b51467a8c207740f52d416e7b52f89038d2a1ed216b 3 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 108223566ff7e7ebc95d4b51467a8c207740f52d416e7b52f89038d2a1ed216b 3 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=108223566ff7e7ebc95d4b51467a8c207740f52d416e7b52f89038d2a1ed216b 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6RN 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6RN 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6RN 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3892882 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3892882 ']' 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
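That completes the key material: five host keys (keys[0..4]) and four controller keys (ckeys[0..3]); ckeys[4] is left empty, so the keyid=4 passes later attach without --dhchap-ctrlr-key. While waitforlisten blocks on /var/tmp/spdk.sock, note the one pattern every gen_dhchap_key call above followed: draw len/2 random bytes with xxd, wrap the hex in the NVMe-oF DH-HMAC-CHAP secret representation via an inline Python snippet, store it in a mktemp file, and chmod it to 0600. A minimal sketch of that pattern, condensed from the trace rather than copied from nvmf/common.sh (the function body, the python3 invocation style, and the little-endian CRC-32 byte order are assumptions):

gen_dhchap_key() {
    local digest=$1 len=$2   # digest: null|sha256|sha384|sha512; len: key length in hex chars
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars == len/2 random bytes
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Secret representation: DHHC-1:<hash id>:<base64(key bytes || CRC-32 of key)>:
    python3 -c '
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC appended to the key; byte order assumed
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"   # as in the trace: DHCHAP secrets must not be world-readable
    echo "$file"
}

A call like keys[2]=$(gen_dhchap_key sha256 32), as at host/auth.sh@75 above, then yields a path such as /tmp/spdk.key-sha256.pJn holding a DHHC-1:01:... string.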
00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:38.762 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zf6 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.00G ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.00G 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jSl 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.rEs ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rEs 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pJn 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gUy ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gUy 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
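With the target now listening on /var/tmp/spdk.sock, this loop registers each generated file as a named key in SPDK's keyring; key3/ckey3 and key4 follow just below, and ckey4 is skipped because no controller key was generated for slot 4. Standalone equivalents of the RPCs it issues (rpc_cmd is a thin wrapper over SPDK's JSON-RPC client; the scripts/rpc.py path is an assumption, while the key names and files are copied from the trace):

scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.zf6
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.00G
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.jSl
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rEs
# ...key2/ckey2 (pJn/gUy), key3/ckey3 (r2c/eni) and key4 (6RN) follow the same pattern.

These registered names (key0..key4, ckey0..ckey3) are what the later bdev_nvme_attach_controller calls reference via --dhchap-key and --dhchap-ctrlr-key.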
00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.r2c 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.eni ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.eni 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6RN 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
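configure_kernel_target now builds the kernel-mode NVMe-oF target that the SPDK initiator will authenticate against: a subsystem backed by the local NVMe namespace, exported on a TCP port. The sketch below is condensed from the configfs commands traced after this point; xtrace does not show redirect targets, so which echoed value lands in which attribute is inferred from the standard Linux nvmet configfs layout (run as root):

modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"   # flipped to 0 once auth is configured
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# nvmet_auth_init (host/auth.sh@36-38 below) then restricts the subsystem
# to the one host that will be authenticated:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

The nvme discover against 10.0.0.1:4420 that follows, returning two records (the discovery subsystem plus nqn.2024-02.io.spdk:cnode0), confirms the port is live.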
00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:39.057 16:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:42.345 Waiting for block devices as requested 00:26:42.345 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:42.345 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:42.604 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:42.604 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:42.604 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:42.862 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:42.862 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:42.862 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:42.862 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:43.120 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:43.120 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:43.120 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:43.378 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:43.378 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:43.378 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:43.635 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:43.635 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:44.566 No valid GPT data, bailing 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:44.566 00:26:44.566 Discovery Log Number of Records 2, Generation counter 2 00:26:44.566 =====Discovery Log Entry 0====== 00:26:44.566 trtype: tcp 00:26:44.566 adrfam: ipv4 00:26:44.566 subtype: current discovery subsystem 00:26:44.566 treq: not specified, sq flow control disable supported 00:26:44.566 portid: 1 00:26:44.566 trsvcid: 4420 00:26:44.566 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:44.566 traddr: 10.0.0.1 00:26:44.566 eflags: none 00:26:44.566 sectype: none 00:26:44.566 =====Discovery Log Entry 1====== 00:26:44.566 trtype: tcp 00:26:44.566 adrfam: ipv4 00:26:44.566 subtype: nvme subsystem 00:26:44.566 treq: not specified, sq flow control disable supported 00:26:44.566 portid: 1 00:26:44.566 trsvcid: 4420 00:26:44.566 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:44.566 traddr: 10.0.0.1 00:26:44.566 eflags: none 00:26:44.566 sectype: none 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 
]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.566 16:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.824 nvme0n1 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.824 
16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.824 
16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.824 nvme0n1 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.824 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.082 16:04:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.082 nvme0n1 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
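Each connect_authenticate pass, like the sha256/ffdhe2048/keyid=1 one just completed, is the same two-RPC sequence on the initiator side: first constrain which DH-HMAC-CHAP digests and DH groups bdev_nvme may negotiate, then attach, which is where the authentication transaction actually runs. Standalone equivalent (flags, addresses and NQNs are verbatim from the trace; scripts/rpc.py as the client is an assumption):

scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

If the handshake fails, the attach itself fails and no controller appears; the bare nvme0n1 lines in the trace are the attach call reporting the bdev it created (controller nvme0, namespace 1), which is what the subsequent checks key off.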
00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.082 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.340 nvme0n1 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:45.340 16:04:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.340 16:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.598 nvme0n1 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.598 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.599 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.857 nvme0n1 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.857 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.858 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.858 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.858 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.858 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.858 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.858 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.858 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.858 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.115 nvme0n1 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.115 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.116 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.116 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.116 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.116 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.116 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.116 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.373 nvme0n1 00:26:46.373 
16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:46.373 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.374 16:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.631 nvme0n1 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:46.631 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.632 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.889 nvme0n1 00:26:46.889 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.890 
16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.890 16:04:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.890 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.147 nvme0n1 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:47.147 16:04:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.147 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.404 nvme0n1 00:26:47.404 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.404 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.404 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.404 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.404 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.405 16:04:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.405 16:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.662 nvme0n1 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.663 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.920 16:04:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.920 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.179 nvme0n1 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.179 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.437 nvme0n1 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.437 16:04:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.437 16:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.695 nvme0n1 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:48.695 16:04:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.695 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.260 nvme0n1 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.260 
16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.260 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.261 16:04:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.261 16:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.518 nvme0n1 00:26:49.518 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.518 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.519 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.084 nvme0n1 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.084 
16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.084 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.085 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.342 nvme0n1 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.342 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.599 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.599 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.599 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:50.599 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.599 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.600 16:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.858 nvme0n1 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.858 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.423 nvme0n1 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.423 16:04:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.423 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.681 16:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.681 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.246 nvme0n1 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.246 16:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.875 nvme0n1 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.875 
16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.875 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
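Note: the nvmf/common.sh@741-755 lines bracketing this point are the test's get_main_ns_ip helper, re-traced before every connect. A minimal sketch of the logic those xtrace lines correspond to, assuming TEST_TRANSPORT and NVMF_INITIATOR_IP (10.0.0.1 in this run) are exported by the harness; names follow the trace and this is not a verbatim copy of nvmf/common.sh:

# Sketch of get_main_ns_ip as reconstructed from the trace above/below.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP # target-side IP, used by RDMA jobs
        ["tcp"]=NVMF_INITIATOR_IP     # initiator-side IP, used by TCP jobs like this one
    )
    # "[[ -z tcp ]]" / "[[ -z NVMF_INITIATOR_IP ]]" in the trace: require a
    # transport and a candidate variable *name* mapped to it.
    [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # "[[ -z 10.0.0.1 ]]" in the trace: dereference the name, require a value.
    [[ -z ${!ip:-} ]] && return 1
    echo "${!ip}" # -> 10.0.0.1, consumed as the -a address of the attach call
}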
00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.876 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.442 nvme0n1 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.442 
16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.442 16:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.008 nvme0n1 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.008 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.266 nvme0n1 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.266 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
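The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line just traced (host/auth.sh@58) is the bash idiom that makes the controller key optional. A self-contained demo with fake values (the real ckeys array holds the DHHC-1 secrets echoed throughout this log):

# ${var:+alt} expands to the alternative only when var is set and non-empty,
# so an empty ckeys[keyid] leaves the array empty and "${ckey[@]}" adds no words.
ckeys=([0]="DHHC-1:03:fake-secret:" [4]="") # keyid 4 has no controller key
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo rpc.py bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
done
# keyid 0 -> ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
# keyid 4 -> ... --dhchap-key key4

This is why the keyid-4 attach commands in this log stop at --dhchap-key key4 while every other keyid also passes --dhchap-ctrlr-key.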
00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.267 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.532 nvme0n1 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.532 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.533 16:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.533 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.790 nvme0n1 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.790 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.048 nvme0n1 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.048 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.049 nvme0n1 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.049 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.306 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.306 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.306 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:55.306 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.306 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.306 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.306 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.306 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
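Where the nvmet_auth_set_key echoes land: the trace for each round (host/auth.sh@48-51) only shows echo 'hmac(sha384)', the dhgroup, and the DHHC-1 secrets. A hedged reconstruction of the target side, assuming the stock Linux in-kernel nvmet configfs layout; the attribute paths below are an assumption, not visible in the log itself:

host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host_dir/dhchap_hash"    # digest, auth.sh@48
echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup" # FFDHE group, auth.sh@49
echo 'DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH:' \
    > "$host_dir/dhchap_key"                     # host secret, auth.sh@50
# auth.sh@51 guards the controller key with [[ -z $ckey ]]; when a ckey
# exists it would be written to "$host_dir/dhchap_ctrl_key" the same way.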
00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.307 nvme0n1 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.307 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
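Each connect_authenticate round (host/auth.sh@55-65) condenses to the host-side RPCs below. rpc_cmd in the trace wraps SPDK's scripts/rpc.py (path assumed); key1/ckey1 are the keyring names the test registered for keyid 1, and every flag is taken verbatim from the attach lines in this log:

rpc=scripts/rpc.py
# Pin the initiator to exactly the digest/dhgroup combination under test.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# Connect with DH-HMAC-CHAP; --dhchap-ctrlr-key makes it bidirectional.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Authentication succeeded iff the controller shows up; then tear down,
# mirroring the auth.sh@64-65 check and detach in the trace.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
$rpc bdev_nvme_detach_controller nvme0

The outer loops at auth.sh@100-102 (for digest / for dhgroup / for keyid) repeat this round for every digest x dhgroup x keyid combination the test configures, which is why the same trace recurs throughout this section.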
00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.565 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.566 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.566 nvme0n1 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.566 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.824 nvme0n1 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.824 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.082 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:56.082 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.083 nvme0n1 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.083 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.341 nvme0n1 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.341 16:04:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.341 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.342 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.599 nvme0n1 00:26:56.599 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.599 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.599 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.599 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.599 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.599 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.857 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.115 nvme0n1 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.115 16:04:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.115 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.116 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.374 nvme0n1 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:57.374 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:57.375 16:04:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.375 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.633 nvme0n1 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:57.633 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.891 nvme0n1 00:26:57.891 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.891 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.891 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.892 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.458 nvme0n1 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.458 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.459 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.716 nvme0n1 00:26:58.716 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.716 16:04:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.716 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.716 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.716 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.716 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:58.974 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.975 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.233 nvme0n1 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.233 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.799 nvme0n1 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
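[Editor's note] The get_main_ns_ip trace interleaved above (nvmf/common.sh@741-755) resolves the initiator address that every attach below uses. A minimal reconstruction of that helper from the xtrace follows; only the candidate map and the final echo of 10.0.0.1 are visible in the log, so the transport variable name (TEST_TRANSPORT) and the failure returns are assumptions.

# Reconstruction of get_main_ns_ip as traced above (nvmf/common.sh@741-755).
# Assumption: the transport is taken from a variable such as TEST_TRANSPORT ("tcp" here).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()

    # Each transport maps to the *name* of the variable holding its address.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1
    echo "${!ip}"
}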
00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.799 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.057 nvme0n1 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.057 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
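[Editor's note] Each host-side iteration then runs connect_authenticate (host/auth.sh@104), whose flow is visible in the trace: restrict bdev_nvme to the digest/dhgroup under test, attach with the numbered DH-HMAC-CHAP key (plus controller key when one exists), confirm the controller came up, and detach. The sketch below is reconstructed from the xtrace at host/auth.sh@55-65; the exact grouping of the verification step inside the script is an assumption.

# connect_authenticate, reconstructed from the host/auth.sh@55-65 trace above.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Controller (bidirectional) key is optional; only passed when ckey$keyid exists (@58).
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Limit negotiation to the digest/dhgroup being tested (@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect; this only succeeds if DH-HMAC-CHAP authentication passes (@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Verify the controller exists, then tear it down for the next iteration (@64-65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}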
00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.315 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.881 nvme0n1 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.881 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.882 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.448 nvme0n1 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.448 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.015 nvme0n1 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.015 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.581 nvme0n1 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
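[Editor's note] The target-side half of every iteration is nvmet_auth_set_key (host/auth.sh@103, traced at @42-51): it installs the matching DHHC-1 secret, hash, and DH group for the host NQN on the kernel nvmet target before the host attempts to connect. The echoes in the trace are presumably redirected into the nvmet configfs host attributes; the configfs path and attribute names in this sketch are assumptions, not shown in the log.

# Sketch of nvmet_auth_set_key (host/auth.sh@42-51). The configfs destination
# (/sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_*) is an assumption.
nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey
    digest=$1 dhgroup=$2 keyid=$3
    key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host_dir/dhchap_hash"     # e.g. hmac(sha384)
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"  # e.g. ffdhe8192
    echo "$key"          > "$host_dir/dhchap_key"      # DHHC-1:xx:... host secret
    # A controller (bidirectional) key is only set when one exists for this keyid (@51).
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}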
00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.581 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.839 16:05:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.839 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.406 nvme0n1 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.406 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.407 nvme0n1 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.407 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.666 16:05:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.666 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.666 nvme0n1 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.666 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.667 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.667 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.667 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 nvme0n1 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.924 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.925 16:05:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.925 16:05:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.925 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.183 nvme0n1 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.183 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.447 nvme0n1 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.448 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.707 nvme0n1 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.707 
16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.707 16:05:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.707 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.964 nvme0n1 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
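The nvmet_auth_set_key calls traced at host/auth.sh@42-51 around this point program the kernel target's expectations for each (digest, dhgroup, keyid) combination driven by the loops at host/auth.sh@101-102. The trace shows only the echoed values, not their destinations; a minimal sketch of the likely target-side writes, assuming the standard Linux nvmet configfs layout:

    # Assumed reconstruction: the configfs paths below are not shown in the
    # trace; only the echoed values (@48 'hmac(sha512)', @49 the dhgroup,
    # @50 the key, @51 the optional controller key) come from the log.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"
    echo ffdhe3072      > "$host/dhchap_dhgroup"
    echo "$key"         > "$host/dhchap_key"        # DHHC-1:... host secret
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # bidirectional auth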
00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.965 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.222 nvme0n1 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.222 16:05:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.222 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
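The get_main_ns_ip helper traced at nvmf/common.sh@741-755 just above and below resolves the address to dial by mapping the transport to the name of an environment variable and then dereferencing it. A reconstruction from those trace lines (the variable name TEST_TRANSPORT is an assumption; the trace only shows its expanded value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # @744
            ["tcp"]=NVMF_INITIATOR_IP       # @745
        )
        # @747: the transport and its mapped variable name must both be set
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip holds a variable *name*
        [[ -z ${!ip} ]] && return 1            # @750: indirect lookup, 10.0.0.1 here
        echo "${!ip}"                          # @755
    }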
00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.223 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.516 nvme0n1 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.516 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.517 
16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.517 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 nvme0n1 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.785 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.043 nvme0n1 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.043 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.044 16:05:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.044 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.302 nvme0n1 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
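Every secret in this trace uses the DHHC-1 representation defined for NVMe in-band authentication: DHHC-1:<hh>:<base64>:, where <hh> identifies the hash the secret is sized for (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the key bytes followed by a 4-byte CRC32 trailer. A quick length check against one of the keyid=2 secrets from this log:

    # A ':01:' (SHA-256-sized) secret should decode to 32 key bytes plus
    # the 4-byte CRC32 trailer, i.e. 36 bytes in total.
    key='DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/:'
    payload=${key#DHHC-1:??:}   # strip the prefix and the hash-id field
    payload=${payload%:}        # strip the trailing ':'
    echo -n "$payload" | base64 -d | wc -c   # prints 36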
00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.302 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.561 nvme0n1 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.561 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.820 nvme0n1 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.820 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.078 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.078 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.078 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.078 nvme0n1 00:27:07.078 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.078 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.078 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.078 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.078 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
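On the host side, rpc_cmd in the trace is the test suite's wrapper around SPDK's scripts/rpc.py, so the ffdhe6144/keyid-0 iteration above can be replayed by hand as below; key0 and ckey0 are key names registered earlier in the run, outside this excerpt:

    # Constrain the host to exactly one digest/dhgroup, then attach; the
    # attach succeeds only if DH-HMAC-CHAP completes with those parameters.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0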
00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.337 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.595 nvme0n1 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.595 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
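Every attach in this log is followed by the same success check at host/auth.sh@64-65: exactly one controller named nvme0 must exist, and it is detached before the next (dhgroup, keyid) combination starts. The \n\v\m\e\0 strings are just bash xtrace quoting the literal right-hand side of a [[ == ]] match; unescaped, the sequence is:

    # Verify the authenticated controller came up, then tear it down.
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]    # xtrace renders the quoted RHS as \n\v\m\e\0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0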
00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.596 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.162 nvme0n1 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.162 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.421 nvme0n1 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.421 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.680 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.680 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.939 nvme0n1 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.939 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.504 nvme0n1 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.504 16:05:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA0YjZkNDMxNmE5ODBhYWFjMTFjZTI2NmFiNGY5NjVg4/nH: 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhiYzdhODYyNzY5YmM4ZThjMWVkYjkzMzhlNjY2N2M1Mjk0NDU5ZTcyOWQ2YjAxMmFmN2I4YWVjNjRhMzQzMAbq+D4=: 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.504 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.070 nvme0n1 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.071 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.633 nvme0n1 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.633 16:05:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWZmMjlhNmJjZmZlNTA1NzNhZGU4MTBjMjEwNzFkZGVjX7K/: 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: ]] 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE3YzMxMDc2ZTI1ZWRhMjk4NTE2MzhiMWNjYzgwYjakLgsy: 00:27:10.633 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.634 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.197 nvme0n1 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNWZhYjgwMzQ3ODQzOTQxY2EyOTg2ZjcwZDkxODM3YWY3MWM1NmM4ZjRiYzc5DLi+SA==: 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: ]] 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGI1ZjlmYmVkMGQzYjA4NGUwNzFhYTliODhjZGFlNmMQrivC: 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:11.197 16:05:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.197 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.453 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.017 nvme0n1 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA4MjIzNTY2ZmY3ZTdlYmM5NWQ0YjUxNDY3YThjMjA3NzQwZjUyZDQxNmU3YjUyZjg5MDM4ZDJhMWVkMjE2Yq80WMA=: 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:12.017 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.583 nvme0n1 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.583 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTcwNjE0MWExOWYzNjQ3YWZkYTRmNzkxOTkxNmU2ZDMzODZkNjY4Njk0Y2FhNWYyo0nn7Q==: 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRiOTA2ODYwZWRmNTA0ZDc4MDE3NDczMDNmZmQwNTNjNGE4YmNkMTI1YWRmYzVm4ZsrLg==: 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.583 
16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.583 request: 00:27:12.583 { 00:27:12.583 "name": "nvme0", 00:27:12.583 "trtype": "tcp", 00:27:12.583 "traddr": "10.0.0.1", 00:27:12.583 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:12.583 "adrfam": "ipv4", 00:27:12.583 "trsvcid": "4420", 00:27:12.583 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:12.583 "method": "bdev_nvme_attach_controller", 00:27:12.583 "req_id": 1 00:27:12.583 } 00:27:12.583 Got JSON-RPC error response 00:27:12.583 response: 00:27:12.583 { 00:27:12.583 "code": -32602, 00:27:12.583 "message": "Invalid parameters" 00:27:12.583 } 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:12.583 
16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:12.583 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.584 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:12.584 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.584 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:12.584 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.584 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.842 request: 00:27:12.842 { 00:27:12.842 "name": "nvme0", 00:27:12.842 "trtype": "tcp", 00:27:12.842 "traddr": "10.0.0.1", 00:27:12.842 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:12.842 "adrfam": "ipv4", 00:27:12.842 "trsvcid": "4420", 00:27:12.842 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:12.842 "dhchap_key": "key2", 00:27:12.842 "method": "bdev_nvme_attach_controller", 00:27:12.842 "req_id": 1 00:27:12.842 } 00:27:12.842 Got JSON-RPC error response 00:27:12.842 response: 00:27:12.842 { 00:27:12.842 "code": -32602, 00:27:12.842 "message": "Invalid parameters" 00:27:12.842 } 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.842 request: 00:27:12.842 { 00:27:12.842 "name": "nvme0", 00:27:12.842 "trtype": "tcp", 00:27:12.842 "traddr": "10.0.0.1", 00:27:12.842 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:12.842 "adrfam": "ipv4", 00:27:12.842 "trsvcid": "4420", 00:27:12.842 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:12.842 "dhchap_key": "key1", 00:27:12.842 "dhchap_ctrlr_key": "ckey2", 00:27:12.842 "method": "bdev_nvme_attach_controller", 00:27:12.842 
"req_id": 1 00:27:12.842 } 00:27:12.842 Got JSON-RPC error response 00:27:12.842 response: 00:27:12.842 { 00:27:12.842 "code": -32602, 00:27:12.842 "message": "Invalid parameters" 00:27:12.842 } 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.842 rmmod nvme_tcp 00:27:12.842 rmmod nvme_fabrics 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3892882 ']' 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3892882 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3892882 ']' 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3892882 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:12.842 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3892882 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3892882' 00:27:13.100 killing process with pid 3892882 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3892882 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3892882 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:13.100 
16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.100 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:15.630 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:18.161 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:18.161 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:18.161 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:18.161 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:18.161 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:18.161 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:18.161 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:18.161 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:18.161 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:18.420 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:18.420 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:18.420 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:18.420 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:18.420 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:18.420 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:18.420 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:19.797 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:20.054 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zf6 /tmp/spdk.key-null.jSl /tmp/spdk.key-sha256.pJn /tmp/spdk.key-sha384.r2c /tmp/spdk.key-sha512.6RN /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:20.054 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:23.385 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:23.386 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:23.386 0000:00:04.5 (8086 2021): Already 
using the vfio-pci driver
00:27:23.386 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:27:23.386 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:27:23.386
00:27:23.386 real 0m52.469s
00:27:23.386 user 0m45.107s
00:27:23.386 sys 0m14.708s
00:27:23.386 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable
00:27:23.386 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.386 ************************************
00:27:23.386 END TEST nvmf_auth_host
00:27:23.386 ************************************
00:27:23.386 16:05:21 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]]
00:27:23.386 16:05:21 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:27:23.386 16:05:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:27:23.386 16:05:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:27:23.386 16:05:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:23.386 ************************************
00:27:23.386 START TEST nvmf_digest
00:27:23.386 ************************************
00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:27:23.386 * Looking for test storage...
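Editor's note: the auth cleanup a little earlier tears down the kernel nvmet target through configfs strictly leaf-first — links before directories, children before parents — or the rmdir calls fail with EBUSY. A sketch of that order, using the same NQNs as the trace; the namespaces/1/enable path for the bare 'echo 0' in the trace is an assumption about which attribute was being cleared:

  cfs=/sys/kernel/config/nvmet
  subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0

  rm $subsys/allowed_hosts/nqn.2024-02.io.spdk:host0        # unlink host from subsystem
  rmdir $cfs/hosts/nqn.2024-02.io.spdk:host0                # then remove the host entry
  echo 0 > $subsys/namespaces/1/enable                      # assumed target of the 'echo 0' above
  rm -f $cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0  # unlink subsystem from port
  rmdir $subsys/namespaces/1                                # namespaces before the subsystem itself
  rmdir $cfs/ports/1
  rmdir $subsys
  modprobe -r nvmet_tcp nvmet                               # modules unload once configfs is empty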
00:27:23.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:23.386 16:05:21 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:23.386 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:23.387 16:05:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:23.387 16:05:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:29.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:29.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.946 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:29.947 Found net devices under 0000:af:00.0: cvl_0_0 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:29.947 Found net devices under 0000:af:00.1: cvl_0_1 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.947 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:30.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:27:30.234 00:27:30.234 --- 10.0.0.2 ping statistics --- 00:27:30.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.234 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms
00:27:30.234
00:27:30.234 --- 10.0.0.1 ping statistics ---
00:27:30.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:30.234 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:30.234 ************************************
00:27:30.234 START TEST nvmf_digest_clean
00:27:30.234 ************************************
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3906583
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3906583
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3906583 ']'
00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:30.234
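Editor's note: the nvmf_tcp_init trace just above splits the two e810 ports into the two ends of the link — cvl_0_0 moves into a network namespace as the target side, cvl_0_1 stays in the root namespace as the initiator. A condensed sketch of the topology commands as they appear in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                   # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings completing with 0% loss, as shown above, is what lets nvmf_tcp_init return 0 and the digest tests proceed.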
16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:30.234 16:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:30.234 [2024-05-15 16:05:28.689629] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:30.234 [2024-05-15 16:05:28.689674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.234 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.234 [2024-05-15 16:05:28.764126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.493 [2024-05-15 16:05:28.836789] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.493 [2024-05-15 16:05:28.836822] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.493 [2024-05-15 16:05:28.836831] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.493 [2024-05-15 16:05:28.836839] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.493 [2024-05-15 16:05:28.836862] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:30.493 [2024-05-15 16:05:28.836882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.060 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.060 null0 00:27:31.060 [2024-05-15 16:05:29.614826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.318 [2024-05-15 16:05:29.638833] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:31.318 [2024-05-15 16:05:29.639089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3906687 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3906687 /var/tmp/bperf.sock 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3906687 ']' 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:31.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:31.318 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.318 [2024-05-15 16:05:29.692453] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:31.318 [2024-05-15 16:05:29.692501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906687 ] 00:27:31.318 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.318 [2024-05-15 16:05:29.763064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.318 [2024-05-15 16:05:29.832147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.250 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.250 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:32.250 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:32.250 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:32.250 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:32.250 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.250 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.816 nvme0n1 00:27:32.816 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:32.816 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:32.816 Running I/O for 2 seconds... 
00:27:34.714
00:27:34.714 Latency(us)
00:27:34.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:34.714 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:34.714 nvme0n1 : 2.00 28654.07 111.93 0.00 0.00 4462.35 2293.76 19293.80
00:27:34.714 ===================================================================================================================
00:27:34.714 Total : 28654.07 111.93 0.00 0.00 4462.35 2293.76 19293.80
00:27:34.714 0
00:27:34.714 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:34.714 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:34.714 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:34.714 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:34.714 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:34.714 | select(.opcode=="crc32c")
00:27:34.714 | "\(.module_name) \(.executed)"'
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3906687
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3906687 ']'
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3906687
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3906687
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3906687'
killing process with pid 3906687
16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3906687
Received shutdown signal, test time was about 2.000000 seconds
00:27:34.972
00:27:34.972 Latency(us)
00:27:34.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:34.972 ===================================================================================================================
00:27:34.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:34.972 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3906687
00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:27:35.229 16:05:33
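Editor's note: each digest_clean pass, like the one that just finished above, drives bdevperf over a private RPC socket: start it with --wait-for-rpc, finish framework init, attach an NVMe-oF controller with digests enabled, then trigger the run from bdevperf.py. A sketch of that sequence using the same paths and arguments the trace shows (run from the spdk checkout; the backgrounding is implied rather than traced):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # produces the bdev nvme0n1 seen above

  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The -z flag keeps bdevperf idle until perform_tests arrives, which is why every run in this log starts with the bperf.sock wait message before any I/O appears.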
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3907413 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3907413 /var/tmp/bperf.sock 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3907413 ']' 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:35.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:35.229 16:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:35.229 [2024-05-15 16:05:33.724747] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:35.229 [2024-05-15 16:05:33.724796] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907413 ] 00:27:35.229 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:35.229 Zero copy mechanism will not be used. 
00:27:35.229 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.487 [2024-05-15 16:05:33.793067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.487 [2024-05-15 16:05:33.857387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.052 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:36.052 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:36.052 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:36.052 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:36.052 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:36.308 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:36.308 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:36.566 nvme0n1 00:27:36.566 16:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:36.566 16:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:36.823 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:36.823 Zero copy mechanism will not be used. 00:27:36.823 Running I/O for 2 seconds... 
00:27:38.795
00:27:38.795 Latency(us)
00:27:38.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:38.795 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:38.795 nvme0n1 : 2.00 2723.38 340.42 0.00 0.00 5872.31 5216.67 25270.68
00:27:38.795 ===================================================================================================================
00:27:38.795 Total : 2723.38 340.42 0.00 0.00 5872.31 5216.67 25270.68
00:27:38.795 0
00:27:38.795 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:38.795 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:38.795 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:38.795 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:38.795 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:38.795 | select(.opcode=="crc32c")
00:27:38.795 | "\(.module_name) \(.executed)"'
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3907413
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3907413 ']'
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3907413
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3907413
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3907413'
killing process with pid 3907413
16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3907413
Received shutdown signal, test time was about 2.000000 seconds
00:27:39.077
00:27:39.077 Latency(us)
00:27:39.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:39.077 ===================================================================================================================
00:27:39.077 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:39.077 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3907413
00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
16:05:37
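Editor's note: the MiB/s column in these bdevperf tables is simply IOPS times block size, so the two randread runs so far can be sanity-checked from the numbers above (values copied from the tables; the awk one-liner is just an illustrative calculator):

  # 4 KiB randread:   28654.07 IOPS * 4096 B   / 2^20 = 111.93 MiB/s
  # 128 KiB randread:  2723.38 IOPS * 131072 B / 2^20 = 340.42 MiB/s
  awk 'BEGIN { printf "%.2f %.2f\n", 28654.07*4096/1048576, 2723.38*131072/1048576 }'

Both match the table, and the comparison also shows the expected trade: the 128 KiB queue-depth-16 run moves about 3x the bandwidth at roughly a tenth of the IOPS, with correspondingly higher per-I/O latency.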
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3908045 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3908045 /var/tmp/bperf.sock 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3908045 ']' 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:39.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:39.335 16:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:39.335 [2024-05-15 16:05:37.702941] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:27:39.335 [2024-05-15 16:05:37.702993] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908045 ] 00:27:39.335 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.335 [2024-05-15 16:05:37.773580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.335 [2024-05-15 16:05:37.848099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.268 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:40.268 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:40.268 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:40.268 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:40.268 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:40.268 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.268 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.526 nvme0n1 00:27:40.526 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:40.526 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:40.526 Running I/O for 2 seconds... 
00:27:43.051
00:27:43.051 Latency(us)
00:27:43.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.051 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:43.051 nvme0n1 : 2.00 27874.85 108.89 0.00 0.00 4584.24 2634.55 23173.53
00:27:43.051 ===================================================================================================================
00:27:43.051 Total : 27874.85 108.89 0.00 0.00 4584.24 2634.55 23173.53
00:27:43.051 0
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:43.051 | select(.opcode=="crc32c")
00:27:43.051 | "\(.module_name) \(.executed)"'
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3908045
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3908045 ']'
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3908045
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3908045
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3908045'
killing process with pid 3908045
16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3908045
Received shutdown signal, test time was about 2.000000 seconds
00:27:43.051
00:27:43.051 Latency(us)
00:27:43.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.051 ===================================================================================================================
00:27:43.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3908045
00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
16:05:41
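Editor's note: after every run the test reads the crc32c accel counters back over the same socket and checks which module executed them — software here, because every run in this section was launched with scan_dsa=false (the 'false' / exp_module=software steps above). The query as the trace issues it, reproduced runnable; the expected-output comment is an inference from the @95/@96 checks rather than captured output:

  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"'
  # expected for these runs: a line like "software <count>" with count > 0,
  # which is exactly what (( acc_executed > 0 )) and the software match assert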
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3908767 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3908767 /var/tmp/bperf.sock 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3908767 ']' 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:43.051 16:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.051 [2024-05-15 16:05:41.590177] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:43.051 [2024-05-15 16:05:41.590235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908767 ] 00:27:43.051 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:43.051 Zero copy mechanism will not be used. 
00:27:43.309 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.309 [2024-05-15 16:05:41.659604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.309 [2024-05-15 16:05:41.733628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.874 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:43.874 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:43.874 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:43.874 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:43.874 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:44.131 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.131 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.696 nvme0n1 00:27:44.696 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:44.696 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:44.696 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:44.696 Zero copy mechanism will not be used. 00:27:44.696 Running I/O for 2 seconds... 
00:27:46.597
00:27:46.597 Latency(us)
00:27:46.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:46.597 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:46.597 nvme0n1 : 2.01 1963.92 245.49 0.00 0.00 8131.97 6081.74 29360.13
00:27:46.597 ===================================================================================================================
00:27:46.597 Total : 1963.92 245.49 0.00 0.00 8131.97 6081.74 29360.13
00:27:46.597 0
00:27:46.597 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:46.597 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:46.597 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:46.598 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:46.598 | select(.opcode=="crc32c")
00:27:46.598 | "\(.module_name) \(.executed)"'
00:27:46.598 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3908767
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3908767 ']'
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3908767
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:27:46.855 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:46.856 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3908767
00:27:46.856 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:46.856 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:46.856 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3908767'
killing process with pid 3908767
16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3908767
Received shutdown signal, test time was about 2.000000 seconds
00:27:46.856
00:27:46.856 Latency(us)
00:27:46.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:46.856 ===================================================================================================================
00:27:46.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:46.856 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3908767
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3906583
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3906583 ']'
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3906583
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3906583
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:27:47.114 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3906583'
killing process with pid 3906583
16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3906583
00:27:47.114 [2024-05-15 16:05:45.614500] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3906583
00:27:47.372
00:27:47.372 real 0m17.187s
00:27:47.372 user 0m33.028s
00:27:47.372 sys 0m4.444s
00:27:47.372 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable
00:27:47.372 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:47.372 ************************************
00:27:47.372 END TEST nvmf_digest_clean
00:27:47.372 ************************************
00:27:47.372 16:05:45 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:47.373 ************************************
00:27:47.373 START TEST nvmf_digest_error
00:27:47.373 ************************************
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3909517
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3909517
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3909517 ']'
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:47.373 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:47.631 [2024-05-15 16:05:45.955890] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:47.631 [2024-05-15 16:05:45.955934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.631 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.631 [2024-05-15 16:05:46.027046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.631 [2024-05-15 16:05:46.100478] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.631 [2024-05-15 16:05:46.100511] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.631 [2024-05-15 16:05:46.100521] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.631 [2024-05-15 16:05:46.100529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.631 [2024-05-15 16:05:46.100536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
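Because the target is started with --wait-for-rpc, the test can reassign the crc32c opcode to the error-injection accel module before the framework finishes initializing; the individual digest-error cases then toggle that module between leaving digests intact and corrupting them. A minimal sketch of the target-side RPC sequence, assuming the default /var/tmp/spdk.sock control socket inside the same network namespace:

    # route crc32c through the software error-injection module, then complete startup
    scripts/rpc.py accel_assign_opc -o crc32c -m error
    scripts/rpc.py framework_start_init
    # per test case: disable injection, or corrupt crc32c results (-i 256 as used in this run)
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256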
00:27:47.631 [2024-05-15 16:05:46.100556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.197 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.197 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:48.197 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:48.197 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.197 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.455 [2024-05-15 16:05:46.790595] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.455 null0 00:27:48.455 [2024-05-15 16:05:46.879327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.455 [2024-05-15 16:05:46.903323] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:48.455 [2024-05-15 16:05:46.903552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3909627 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3909627 /var/tmp/bperf.sock 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3909627 ']' 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:48.455 16:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.455 [2024-05-15 16:05:46.954765] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:48.455 [2024-05-15 16:05:46.954810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909627 ] 00:27:48.455 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.712 [2024-05-15 16:05:47.025481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.712 [2024-05-15 16:05:47.099960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.277 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:49.277 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:49.277 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.277 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.534 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:49.534 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.534 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.534 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.534 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.792 nvme0n1 00:27:49.792 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:49.792 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.792 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.792 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.792 16:05:48 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:49.792 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.792 Running I/O for 2 seconds... 00:27:49.792 [2024-05-15 16:05:48.315185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:49.792 [2024-05-15 16:05:48.315222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.792 [2024-05-15 16:05:48.315235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.793 [2024-05-15 16:05:48.324184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:49.793 [2024-05-15 16:05:48.324213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.793 [2024-05-15 16:05:48.324225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.793 [2024-05-15 16:05:48.333283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:49.793 [2024-05-15 16:05:48.333314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.793 [2024-05-15 16:05:48.333325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.793 [2024-05-15 16:05:48.342431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:49.793 [2024-05-15 16:05:48.342453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.793 [2024-05-15 16:05:48.342463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.793 [2024-05-15 16:05:48.351386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:49.793 [2024-05-15 16:05:48.351408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.793 [2024-05-15 16:05:48.351419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.050 [2024-05-15 16:05:48.359916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.050 [2024-05-15 16:05:48.359941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.050 [2024-05-15 16:05:48.359952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.050 [2024-05-15 16:05:48.369813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x14a7c40) 00:27:50.050 [2024-05-15 16:05:48.369835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.050 [2024-05-15 16:05:48.369846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.050 [2024-05-15 16:05:48.379294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.050 [2024-05-15 16:05:48.379317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.050 [2024-05-15 16:05:48.379331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.050 [2024-05-15 16:05:48.387140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.050 [2024-05-15 16:05:48.387162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.050 [2024-05-15 16:05:48.387172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.050 [2024-05-15 16:05:48.396709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.050 [2024-05-15 16:05:48.396730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.396741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.405516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.405537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.405547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.414706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.414727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.414737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.423229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.423251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.423261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.432078] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.432098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.432108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.440717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.440737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.440747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.449882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.449903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.449913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.459176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.459204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.459214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.467296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.467316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.467327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.476664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.476686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.476697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.485802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.485824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.485835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.493976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.493997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.494007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.502852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.502873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.502884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.512711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.512732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.512743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.521347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.521368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.521378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.530364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.530384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.530394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.538501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.538522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.538532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.547857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.547877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.547887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.557073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.557094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.557104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.564800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.564821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.564831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.574619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.574641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.574652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.582994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.583015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.583025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.592785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.592807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.592817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.601508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.601528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.601538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.051 [2024-05-15 16:05:48.610445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.051 [2024-05-15 16:05:48.610468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.051 [2024-05-15 16:05:48.610483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.308 [2024-05-15 16:05:48.620703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.308 [2024-05-15 16:05:48.620728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.308 [2024-05-15 16:05:48.620739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.308 [2024-05-15 16:05:48.628455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.308 [2024-05-15 16:05:48.628476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.628487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.638578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.638600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.638611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.647555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.647578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.647589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.656115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.656138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.656148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.664862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.664882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.664893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.673592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.673618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:50.309 [2024-05-15 16:05:48.673628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.683359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.683380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.683390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.691251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.691275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.691286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.700483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.700505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.700516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.709174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.709202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.709213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.717980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.718001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.718011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.726936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.726957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.726967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.736180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.736208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:2576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.736233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.744248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.744269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.744280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.754303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.754324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.754335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.761456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.761479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.761492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.771379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.771402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.771413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.779920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.779943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.779954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.789145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.789168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.789179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.798415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.798437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.798448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.807072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.807094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.807105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.816100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.816122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.816132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.824820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.824842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.824852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.833959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.833981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.833991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.842984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.843008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.843019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.851616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.851637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.851647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.860808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 
00:27:50.309 [2024-05-15 16:05:48.860830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.860840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.309 [2024-05-15 16:05:48.868980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.309 [2024-05-15 16:05:48.869004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.309 [2024-05-15 16:05:48.869015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.879295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.879320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.879331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.888383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.888406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.888416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.896714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.896735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.896746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.905968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.905990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.906000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.914999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.915021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.915031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.923217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.923239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.923249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.932690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.932711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.932721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.942231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.942252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.942262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.950187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.950214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.950240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.960119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.960140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.960150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.968624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.968645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.968656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.977499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.977521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.977531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.986918] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.986940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.986950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:48.996185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:48.996213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:48.996227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:49.004737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:49.004759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:49.004769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:49.013002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:49.013024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.566 [2024-05-15 16:05:49.013034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.566 [2024-05-15 16:05:49.022345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.566 [2024-05-15 16:05:49.022366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.022376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.031738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.031759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.031769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.039493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.039514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.039525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:50.567 [2024-05-15 16:05:49.048511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.048532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.048542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.057640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.057661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.057671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.066660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.066681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.066691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.075533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.075557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.075568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.084144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.084166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.084177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.093154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.093176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.093186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.102163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.102186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.102202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.111216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.111237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.111247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.567 [2024-05-15 16:05:49.120547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.567 [2024-05-15 16:05:49.120569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.567 [2024-05-15 16:05:49.120580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.129241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.129265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.129277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.137754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.137778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.137789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.148001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.148023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.148034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.155381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.155403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.155413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.165160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.165182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.165198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.172785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.172806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.172817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.183538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.183560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.183570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.191620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.191642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.191652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.201824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.201847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.201857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.209398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.209419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.209430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.218863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.218885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.218895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.227808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.227833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:50.824 [2024-05-15 16:05:49.227843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.237286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.237307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.237317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.245860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.245882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.245892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.255791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.255813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.255823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.265502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.265523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.265533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.273097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.273118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.273129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.282749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.282771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.282782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.292399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.292420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:4497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.292430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.824 [2024-05-15 16:05:49.300118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.824 [2024-05-15 16:05:49.300139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.824 [2024-05-15 16:05:49.300149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.825 [2024-05-15 16:05:49.310202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.825 [2024-05-15 16:05:49.310225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.825 [2024-05-15 16:05:49.310236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.825 [2024-05-15 16:05:49.326817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.825 [2024-05-15 16:05:49.326838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.825 [2024-05-15 16:05:49.326848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.825 [2024-05-15 16:05:49.336898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.825 [2024-05-15 16:05:49.336920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.825 [2024-05-15 16:05:49.336931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.825 [2024-05-15 16:05:49.346627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.825 [2024-05-15 16:05:49.346647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.825 [2024-05-15 16:05:49.346658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.825 [2024-05-15 16:05:49.355284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.825 [2024-05-15 16:05:49.355305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.825 [2024-05-15 16:05:49.355315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.825 [2024-05-15 16:05:49.366317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.825 [2024-05-15 16:05:49.366338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.825 [2024-05-15 16:05:49.366348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.825 [2024-05-15 16:05:49.374437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.825 [2024-05-15 16:05:49.374458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.825 [2024-05-15 16:05:49.374468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.825 [2024-05-15 16:05:49.385163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:50.825 [2024-05-15 16:05:49.385188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.825 [2024-05-15 16:05:49.385204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.081 [2024-05-15 16:05:49.394602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.081 [2024-05-15 16:05:49.394625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.081 [2024-05-15 16:05:49.394640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.081 [2024-05-15 16:05:49.403260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.081 [2024-05-15 16:05:49.403282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.081 [2024-05-15 16:05:49.403292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.081 [2024-05-15 16:05:49.412137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.081 [2024-05-15 16:05:49.412158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.081 [2024-05-15 16:05:49.412168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.081 [2024-05-15 16:05:49.424848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.081 [2024-05-15 16:05:49.424869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.081 [2024-05-15 16:05:49.424879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.081 [2024-05-15 16:05:49.435632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 
00:27:51.081 [2024-05-15 16:05:49.435652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.081 [2024-05-15 16:05:49.435662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.081 [2024-05-15 16:05:49.443621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.081 [2024-05-15 16:05:49.443641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.081 [2024-05-15 16:05:49.443651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.081 [2024-05-15 16:05:49.453323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.081 [2024-05-15 16:05:49.453345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.081 [2024-05-15 16:05:49.453355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.081 [2024-05-15 16:05:49.465206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.081 [2024-05-15 16:05:49.465226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.465237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.475592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.475613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.475624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.483780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.483805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.483815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.492949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.492969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.492979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.501588] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.501610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.501621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.514012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.514034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.514045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.524230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.524255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.524266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.533400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.533421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.533431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.544523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.544543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.544554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.556262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.556283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.556293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.563913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.563935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.563945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.575555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.575576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.575587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.585878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.585898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.585908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.595964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.595985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.595995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.606139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.606159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.606170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.616086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.616107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.616117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.624183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.624210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.624220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.633510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.633531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.633541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.082 [2024-05-15 16:05:49.643516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.082 [2024-05-15 16:05:49.643540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.082 [2024-05-15 16:05:49.643551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.339 [2024-05-15 16:05:49.652224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.339 [2024-05-15 16:05:49.652251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.339 [2024-05-15 16:05:49.652263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.661143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.661166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.661176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.669656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.669679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.669691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.679202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.679224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.679235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.688401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.688422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.688433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.697026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.697057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.707058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.707080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.707090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.719049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.719070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.719081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.727532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.727553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.727563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.737630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.737651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.737662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.746831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.746851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.746861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.755195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.755215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.755225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.768997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.769017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:51.340 [2024-05-15 16:05:49.769027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.780713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.780734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.780744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.790889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.790909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.790919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.798916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.798936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.798947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.809718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.809740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.809750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.818416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.818437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.818450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.827161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.827183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.827199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.836988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.837009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4700 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.837019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.845912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.845933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.845943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.856032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.856052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.856062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.866759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.866780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.866790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.874883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.874903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.874914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.886013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.886034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.886044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.340 [2024-05-15 16:05:49.896822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.340 [2024-05-15 16:05:49.896843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.340 [2024-05-15 16:05:49.896853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.598 [2024-05-15 16:05:49.904952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.598 [2024-05-15 16:05:49.904980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.598 [2024-05-15 16:05:49.904992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.598 [2024-05-15 16:05:49.915129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.915153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.915164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.924553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.924574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.924585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.933794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.933816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.933827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.943399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.943421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.943432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.951590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.951611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.951622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.960697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.960719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.960729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.969057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 
00:27:51.599 [2024-05-15 16:05:49.969079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.969089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.977783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.977804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.977814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.987931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.987952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.987963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:49.996053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:49.996074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:49.996084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.006114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.006136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.006146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.015373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.015393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.015403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.023568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.023590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.023601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.033591] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.033613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.033623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.043342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.043364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.043374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.052923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.052945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.052956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.060998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.061020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.061051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.070805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.070827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.070838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.080542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.080563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.080575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.599 [2024-05-15 16:05:50.089241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40) 00:27:51.599 [2024-05-15 16:05:50.089262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.599 [2024-05-15 16:05:50.089272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:27:51.599 [2024-05-15 16:05:50.098892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a7c40)
00:27:51.599 [2024-05-15 16:05:50.098913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.599 [2024-05-15 16:05:50.098924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[the same three-line pattern (data digest error on tqpair 0x14a7c40, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at millisecond intervals through 16:05:50.293606, last entry cid:61 lba:19474; only the timestamps and the cid/lba/sqhd fields vary between entries, so the duplicates are collapsed here. The run recorded 214 transient transport errors in total, as checked below.]
00:27:51.858
00:27:51.858 Latency(us)
00:27:51.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:51.858 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:51.858 nvme0n1 : 2.04 26797.79 104.68 0.00 0.00 4697.56 2215.12 44040.19
00:27:51.858 ===================================================================================================================
00:27:51.858 Total : 26797.79 104.68 0.00 0.00 4697.56 2215.12 44040.19
00:27:51.858 0
00:27:51.858 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:51.858 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:51.858 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:27:51.858 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3909627
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3909627 ']'
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3909627
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3909627
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3909627'
00:27:52.117 killing process with pid 3909627
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3909627
00:27:52.117 Received shutdown signal, test time was about 2.000000 seconds
00:27:52.117
00:27:52.117 Latency(us)
00:27:52.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:52.117 ===================================================================================================================
00:27:52.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:52.117 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3909627
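For reference, the get_transient_errcount helper traced above reduces to one RPC call plus a jq filter. The following is a minimal standalone sketch, not the verbatim host/digest.sh code, assuming the rpc.py path, the /var/tmp/bperf.sock socket, and the nvme0n1 bdev name from this run:

#!/usr/bin/env bash
# Query bdevperf's RPC socket for per-bdev NVMe error statistics (enabled
# earlier with bdev_nvme_set_options --nvme-error-stat) and print how many
# completions failed with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path from this run
sock=/var/tmp/bperf.sock
bdev=nvme0n1

count=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The check only passes when the injected digest corruption actually surfaced
# as transient transport errors (214 of them in the run above).
(( count > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }
echo "$count"

The jq path above is the piped filter from the trace written as a single expression; the two forms are equivalent.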
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3910365
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3910365 /var/tmp/bperf.sock
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3910365 ']'
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:52.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:52.376 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:52.376 [2024-05-15 16:05:50.820682] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:27:52.376 [2024-05-15 16:05:50.820736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910365 ]
00:27:52.376 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:52.376 Zero copy mechanism will not be used.
00:27:52.376 EAL: No free 2048 kB hugepages reported on node 1
00:27:52.376 [2024-05-15 16:05:50.891121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:52.634 [2024-05-15 16:05:50.967308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:53.199 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:53.199 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:27:53.199 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:53.199 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:53.457 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:53.457 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:53.457 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:53.457 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:53.457 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:53.457 16:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:53.715 nvme0n1
00:27:53.715 16:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:53.715 16:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:53.715 16:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:53.715 16:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:53.715 16:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:53.715 16:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:53.974 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:53.974 Zero copy mechanism will not be used.
00:27:53.974 Running I/O for 2 seconds...
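Condensed into plain rpc.py calls, the setup just traced is roughly the sketch below. It is not the verbatim host/digest.sh code, and it assumes the paths and sockets from this run; note that in the trace bperf_rpc targets /var/tmp/bperf.sock while the rpc_cmd error-injection calls go over rpc.py's default socket:

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this run
bperf_sock=/var/tmp/bperf.sock

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so injected digest errors show up as statistics instead of failing the job.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Make sure no stale crc32c error injection is active while attaching.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the target with TCP data digest enabled (--ddgst), so every data PDU
# carries a crc32c digest that is verified on receive.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results in the accel framework (the -o/-t/-i arguments are
# exactly those used by the test), producing the digest errors seen below.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the queued job: randread, 128 KiB I/O, queue depth 16, for 2 seconds.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests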
00:27:53.974 [2024-05-15 16:05:52.348309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0)
00:27:53.974 [2024-05-15 16:05:52.348343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.974 [2024-05-15 16:05:52.348357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[the same three-line pattern (data digest error on tqpair 0x1064ae0, READ len:32 command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at millisecond intervals through 16:05:53.558660; only the timestamps and the cid/lba/sqhd fields vary between entries, so the duplicates are collapsed here and the log resumes with the final entry of this excerpt]
00:27:55.011 [2024-05-15 16:05:53.558660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0)
00:27:55.011 [2024-05-15 16:05:53.558682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.011 [2024-05-15 16:05:53.558692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.011 [2024-05-15 16:05:53.569438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.011 [2024-05-15 16:05:53.569466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.011 [2024-05-15 16:05:53.569477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.580322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.580347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.580358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.591131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.591154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.591165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.601887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.601911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.601922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.612763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.612787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.612797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.623761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.623784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.623795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.634584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.634606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.634616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.645399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.645421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.645431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.656136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.656158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.656167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.666950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.666972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.666982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.677740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.677763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.677774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.688724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.688747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.688757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.699461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.699483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.699493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.710197] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.710219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.710229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.720943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.720965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.720976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.731847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.731869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.731879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.742572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.742594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.742604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.753256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.753278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.753292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.764004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.764027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.764037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.774745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.774767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.774778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:55.270 [2024-05-15 16:05:53.785496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.785519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.270 [2024-05-15 16:05:53.785529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.270 [2024-05-15 16:05:53.796428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.270 [2024-05-15 16:05:53.796450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.271 [2024-05-15 16:05:53.796461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.271 [2024-05-15 16:05:53.807168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.271 [2024-05-15 16:05:53.807189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.271 [2024-05-15 16:05:53.807205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.271 [2024-05-15 16:05:53.817880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.271 [2024-05-15 16:05:53.817902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.271 [2024-05-15 16:05:53.817912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.271 [2024-05-15 16:05:53.828583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.271 [2024-05-15 16:05:53.828607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.271 [2024-05-15 16:05:53.828619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.839385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.839410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.839421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.850133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.850159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.850170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.860919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.860941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.860952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.871635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.871657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.871667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.882362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.882385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.882396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.893226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.893248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.893258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.903901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.903929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.903945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.914612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.914635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.914645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.925277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.925299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.925308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.935973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.935994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.936004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.946798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.946820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.946830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.957591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.528 [2024-05-15 16:05:53.957613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.528 [2024-05-15 16:05:53.957623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.528 [2024-05-15 16:05:53.968330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:53.968351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:53.968361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:53.979115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:53.979137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:53.979148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:53.989807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:53.989830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:53.989840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.000767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.000789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:54.000799] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.011552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.011573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:54.011583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.022346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.022368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:54.022378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.033146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.033168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:54.033182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.043914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.043935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:54.043945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.054692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.054714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:54.054724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.065716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.065738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:54.065747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.076475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.076497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:55.529 [2024-05-15 16:05:54.076507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.529 [2024-05-15 16:05:54.087173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.529 [2024-05-15 16:05:54.087202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.529 [2024-05-15 16:05:54.087213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.098038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.098064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.098075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.108825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.108848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.108858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.119576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.119599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.119610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.130371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.130397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.130408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.141172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.141201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.141211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.151991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.152013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.152023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.163072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.163094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.163104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.182872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.182895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.182905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.201732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.201754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.201764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.214994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.215016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.215026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.226372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.226393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.226404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.237519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.237541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.237554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.248552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.248574] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.248584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.259899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.259921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.259931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.271646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.271668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.271678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.282500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.282522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.282532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.294307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.294329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.294339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.787 [2024-05-15 16:05:54.314897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1064ae0) 00:27:55.787 [2024-05-15 16:05:54.314919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.787 [2024-05-15 16:05:54.314929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.787 00:27:55.787 Latency(us) 00:27:55.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.787 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:55.787 nvme0n1 : 2.01 2647.14 330.89 0.00 0.00 6041.55 1703.94 28101.84 00:27:55.787 =================================================================================================================== 00:27:55.787 Total : 2647.14 330.89 0.00 0.00 6041.55 1703.94 28101.84 00:27:55.787 0 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:56.046 | .driver_specific 00:27:56.046 | .nvme_error 00:27:56.046 | .status_code 00:27:56.046 | .command_transient_transport_error' 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 )) 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3910365 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3910365 ']' 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3910365 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3910365 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3910365' 00:27:56.046 killing process with pid 3910365 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3910365 00:27:56.046 Received shutdown signal, test time was about 2.000000 seconds 00:27:56.046 00:27:56.046 Latency(us) 00:27:56.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.046 =================================================================================================================== 00:27:56.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:56.046 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3910365 00:27:56.304 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:56.304 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:56.304 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:56.304 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:56.304 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:56.304 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3910978 00:27:56.305 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3910978 /var/tmp/bperf.sock 00:27:56.305 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:56.305 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3910978 ']' 00:27:56.305 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:56.305 
16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:56.305 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:56.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:56.305 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:56.305 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.305 [2024-05-15 16:05:54.829221] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:56.305 [2024-05-15 16:05:54.829275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910978 ] 00:27:56.305 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.606 [2024-05-15 16:05:54.899220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.606 [2024-05-15 16:05:54.971268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.205 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:57.205 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:57.205 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:57.205 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:57.463 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:57.463 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.463 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.463 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.463 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.463 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.722 nvme0n1 00:27:57.722 16:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:57.722 16:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.722 16:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.722 16:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.722 16:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:57.722 16:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:57.722 Running I/O for 2 seconds... 00:27:57.722 [2024-05-15 16:05:56.162704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190fe720 00:27:57.722 [2024-05-15 16:05:56.163500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.163533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.171616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f0ff8 00:27:57.722 [2024-05-15 16:05:56.172466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.172493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.180591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f57b0 00:27:57.722 [2024-05-15 16:05:56.181621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.181644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.189511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190fe2e8 00:27:57.722 [2024-05-15 16:05:56.190393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.190417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.198285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f0ff8 00:27:57.722 [2024-05-15 16:05:56.199163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.199183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.207068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f57b0 00:27:57.722 [2024-05-15 16:05:56.207982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.208002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.218597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f2d80 00:27:57.722 [2024-05-15 16:05:56.220046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6139 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.220066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.229640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f7100 00:27:57.722 [2024-05-15 16:05:56.230518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.230538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.238747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190e95a0 00:27:57.722 [2024-05-15 16:05:56.239218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.239238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.247902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190e95a0 00:27:57.722 [2024-05-15 16:05:56.248441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.248461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.256913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190e95a0 00:27:57.722 [2024-05-15 16:05:56.258257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.258276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.266241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eb328 00:27:57.722 [2024-05-15 16:05:56.266831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.266851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.275310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eb328 00:27:57.722 [2024-05-15 16:05:56.275537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.722 [2024-05-15 16:05:56.275557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.722 [2024-05-15 16:05:56.284661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eb328 00:27:57.981 [2024-05-15 16:05:56.285060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:4110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.285081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.293977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eb328 00:27:57.981 [2024-05-15 16:05:56.294454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.294477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.303056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eb328 00:27:57.981 [2024-05-15 16:05:56.303591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.303612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.312125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eb328 00:27:57.981 [2024-05-15 16:05:56.312461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.312482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.321183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eb328 00:27:57.981 [2024-05-15 16:05:56.321720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.321740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.331346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eff18 00:27:57.981 [2024-05-15 16:05:56.332326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.332347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.340261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190feb58 00:27:57.981 [2024-05-15 16:05:56.341454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.341474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.349551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190feb58 00:27:57.981 [2024-05-15 16:05:56.350309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:21307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.350329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.360071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8e88 00:27:57.981 [2024-05-15 16:05:56.360899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.360920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.369297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190e88f8 00:27:57.981 [2024-05-15 16:05:56.370331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.370350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.377889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eaef0 00:27:57.981 [2024-05-15 16:05:56.378814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.378834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.386215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190e88f8 00:27:57.981 [2024-05-15 16:05:56.387462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.387481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:57.981 [2024-05-15 16:05:56.396768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190e9e10 00:27:57.981 [2024-05-15 16:05:56.398020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.981 [2024-05-15 16:05:56.398040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.406524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ef6a8 00:27:57.982 [2024-05-15 16:05:56.406753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.406772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.415557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ef6a8 00:27:57.982 [2024-05-15 16:05:56.415856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.415875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.424940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ef6a8 00:27:57.982 [2024-05-15 16:05:56.425144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.425171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.434274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ef6a8 00:27:57.982 [2024-05-15 16:05:56.434619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.434642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.443561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ef6a8 00:27:57.982 [2024-05-15 16:05:56.443761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.443789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.452631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ef6a8 00:27:57.982 [2024-05-15 16:05:56.453338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.453358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.461861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f5be8 00:27:57.982 [2024-05-15 16:05:56.463219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.463238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.471942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eee38 00:27:57.982 [2024-05-15 16:05:56.472906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.472925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.480509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ebb98 00:27:57.982 [2024-05-15 
16:05:56.481487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.481507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.489887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eaab8 00:27:57.982 [2024-05-15 16:05:56.491125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.491145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.500290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:57.982 [2024-05-15 16:05:56.501144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.501163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.508970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190eea00 00:27:57.982 [2024-05-15 16:05:56.509715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.509735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.517712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ecc78 00:27:57.982 [2024-05-15 16:05:56.518371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.518390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.526403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f20d8 00:27:57.982 [2024-05-15 16:05:56.527760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.527779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:57.982 [2024-05-15 16:05:56.538280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f57b0 00:27:57.982 [2024-05-15 16:05:56.539378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.982 [2024-05-15 16:05:56.539398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.548122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f6890 
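Each pair of entries above reports the same sequence: tcp.c:2058 (data_crc32_calc_done) flags a failed CRC32C data-digest check on a received PDU for qpair 0x1c7dba0, and the in-flight WRITE is then completed back to the host as TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 / status code 0x22, a retryable transport-level status rather than a media error. When triaging a run like this, a quick sanity check is that the number of digest errors matches the number of transient-transport completions. A minimal sketch, assuming the console output has been saved to a file named build.log (the filename is an assumption, not part of this run):

  # count how many times the receive path reported a failed data digest
  grep -o 'Data digest error on tqpair' build.log | wc -l
  # count how many commands were completed with the retryable transport status
  grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' build.log | wc -l

grep -o prints one line per match before wc -l counts them, so the tallies stay correct even where the console wraps several log entries onto a single physical line.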
00:27:58.241 [2024-05-15 16:05:56.549093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.549117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.556861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f46d0 00:27:58.241 [2024-05-15 16:05:56.557877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.557899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.565608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f7da8 00:27:58.241 [2024-05-15 16:05:56.566556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.566577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.574350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f4f40 00:27:58.241 [2024-05-15 16:05:56.575194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.575213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.582993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ecc78 00:27:58.241 [2024-05-15 16:05:56.583990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.584011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.591706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ef270 00:27:58.241 [2024-05-15 16:05:56.592546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.592566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.600343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8e88 00:27:58.241 [2024-05-15 16:05:56.601339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.601359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.608979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c7dba0) with pdu=0x2000190e9168 00:27:58.241 [2024-05-15 16:05:56.610106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.610127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.619968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f7da8 00:27:58.241 [2024-05-15 16:05:56.620888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.620909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.629101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8618 00:27:58.241 [2024-05-15 16:05:56.629621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.629641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.638168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8618 00:27:58.241 [2024-05-15 16:05:56.638609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.638629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.647286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8618 00:27:58.241 [2024-05-15 16:05:56.648032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.648052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.656406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.241 [2024-05-15 16:05:56.657595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.657615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.665635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8618 00:27:58.241 [2024-05-15 16:05:56.665926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.665946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.675040] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8618 00:27:58.241 [2024-05-15 16:05:56.675254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.675275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.684118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8618 00:27:58.241 [2024-05-15 16:05:56.684496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.684516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.695824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190f8618 00:27:58.241 [2024-05-15 16:05:56.696822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.696842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.705238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.241 [2024-05-15 16:05:56.705953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.705973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.714432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.241 [2024-05-15 16:05:56.715149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.715169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.723394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.241 [2024-05-15 16:05:56.724897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.724916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.732530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.241 [2024-05-15 16:05:56.732884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.732904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.741801] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.241 [2024-05-15 16:05:56.742185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.241 [2024-05-15 16:05:56.742209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.241 [2024-05-15 16:05:56.750878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.241 [2024-05-15 16:05:56.751094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.242 [2024-05-15 16:05:56.751113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.242 [2024-05-15 16:05:56.759915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.242 [2024-05-15 16:05:56.760156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.242 [2024-05-15 16:05:56.760176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.242 [2024-05-15 16:05:56.769032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.242 [2024-05-15 16:05:56.769273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.242 [2024-05-15 16:05:56.769294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.242 [2024-05-15 16:05:56.778123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.242 [2024-05-15 16:05:56.778392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.242 [2024-05-15 16:05:56.778412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.242 [2024-05-15 16:05:56.787171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.242 [2024-05-15 16:05:56.787431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.242 [2024-05-15 16:05:56.787451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.242 [2024-05-15 16:05:56.796254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.242 [2024-05-15 16:05:56.796580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.242 [2024-05-15 16:05:56.796601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.500 
[2024-05-15 16:05:56.805631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.500 [2024-05-15 16:05:56.805892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.500 [2024-05-15 16:05:56.805916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.500 [2024-05-15 16:05:56.814879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.500 [2024-05-15 16:05:56.815126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.500 [2024-05-15 16:05:56.815149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.500 [2024-05-15 16:05:56.824031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.500 [2024-05-15 16:05:56.824277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.500 [2024-05-15 16:05:56.824298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.500 [2024-05-15 16:05:56.833338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.500 [2024-05-15 16:05:56.833589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.500 [2024-05-15 16:05:56.833610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.500 [2024-05-15 16:05:56.842587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.500 [2024-05-15 16:05:56.842833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.500 [2024-05-15 16:05:56.842853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.500 [2024-05-15 16:05:56.851648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.500 [2024-05-15 16:05:56.851887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.500 [2024-05-15 16:05:56.851907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.500 [2024-05-15 16:05:56.860732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.860972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.860992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:58.501 [2024-05-15 16:05:56.869843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.870101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.870122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.878915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.879156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.879176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.887969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.888212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.888232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.897048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.897290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.897310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.906207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.906475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.906495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.915297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.915538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.915557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.924375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.924614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.924633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.933685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.933944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.933964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.942838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.943079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.943098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.951869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.952109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.952128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.960963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.961204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.961240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.970109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.970374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.970393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.979334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.979579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.979600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.988418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.988663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.988683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:56.997516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:56.997775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:56.997799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:57.006547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:57.006787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:57.006807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:57.015629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:57.015873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:57.015892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:57.024674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:57.024916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:57.024936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:57.033772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:57.034032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.501 [2024-05-15 16:05:57.034052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.501 [2024-05-15 16:05:57.042973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.501 [2024-05-15 16:05:57.043217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.502 [2024-05-15 16:05:57.043237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.502 [2024-05-15 16:05:57.052045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.502 [2024-05-15 16:05:57.052286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.502 [2024-05-15 16:05:57.052305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.502 [2024-05-15 16:05:57.061361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.502 [2024-05-15 16:05:57.061609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.502 [2024-05-15 16:05:57.061632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.760 [2024-05-15 16:05:57.070723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.760 [2024-05-15 16:05:57.070990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.760 [2024-05-15 16:05:57.071014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.760 [2024-05-15 16:05:57.079844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.760 [2024-05-15 16:05:57.080094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.760 [2024-05-15 16:05:57.080115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.760 [2024-05-15 16:05:57.088977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.760 [2024-05-15 16:05:57.089240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.760 [2024-05-15 16:05:57.089260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.760 [2024-05-15 16:05:57.098032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.760 [2024-05-15 16:05:57.098272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.760 [2024-05-15 16:05:57.098292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.760 [2024-05-15 16:05:57.107123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.107375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.107396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.116250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.116510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.116529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.125356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.125648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.125667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.134509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.134752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.134772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.143625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.143865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.143884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.152733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.152989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.153008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.162042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.162304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.162324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.171201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.171442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.171461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.180402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.180647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 
16:05:57.180667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.189735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.189978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.189999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.198798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.199041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.199061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.207920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.208162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.208182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.217053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.217315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.217346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.226124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.226383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.226403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.235356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.235601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.235624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.244653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.244909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:58.761 [2024-05-15 16:05:57.244929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.253890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.254149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.254169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.263207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.263468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.263489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.272384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.272646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.272665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.281580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.281825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.281845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.290637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.290903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.290922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.299707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.299946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.761 [2024-05-15 16:05:57.299965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.761 [2024-05-15 16:05:57.308829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0 00:27:58.761 [2024-05-15 16:05:57.309086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25334 len:1 SGL DATA BLOCK 
00:27:58.761 [2024-05-15 16:05:57.309105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[2024-05-15 16:05:57.317914 through 16:05:58.129654: the same record pair repeats roughly every 9 ms, once per 4 KiB WRITE, with only the lba and timestamps changing:]
[tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dba0) with pdu=0x2000190ed0b0]
[nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:<varies> len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000]
[nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0]
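Every record above is the intended outcome of this test step: the accel software path has been told to corrupt CRC32C results, so each 4 KiB WRITE fails its data digest check in tcp.c and is completed back to the initiator as a retryable (dnr:0) transient transport error. To tally how many I/Os took this path when reading a saved console log, a one-liner such as the following is enough (bperf.log is a hypothetical capture file, not an artifact of this run):

  # count digest-error completions in a captured log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bperf.log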
00:27:59.798
00:27:59.798 Latency(us)
00:27:59.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:59.798 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:59.798 nvme0n1 : 2.00 27467.12 107.29 0.00 0.00 4651.96 2411.72 27682.41
00:27:59.798 ===================================================================================================================
00:27:59.798 Total : 27467.12 107.29 0.00 0.00 4651.96 2411.72 27682.41
00:27:59.798 0
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:59.798 | .driver_specific
00:27:59.798 | .nvme_error
00:27:59.798 | .status_code
00:27:59.798 | .command_transient_transport_error'
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3910978
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3910978 ']'
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3910978
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:27:59.798 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3910978
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3910978'
00:28:00.057 killing process with pid 3910978
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3910978
00:28:00.057 Received shutdown signal, test time was about 2.000000 seconds
00:28:00.057
00:28:00.057 Latency(us)
00:28:00.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:00.057 ===================================================================================================================
00:28:00.057 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3910978
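The (( 215 > 0 )) check above is the pass criterion for the 4 KiB pass: the script queries bdevperf's per-bdev I/O statistics over the RPC socket and extracts the NVMe command_transient_transport_error counter (populated because bdev_nvme is configured with --nvme-error-stat, as traced further below). A sketch of that query as a standalone shell helper, assuming an SPDK checkout at $SPDK_DIR and the same socket path as this run:

  get_transient_errcount() {
      # fetch iostat for one bdev and pull out the transient-transport-error count
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }

  # usage: the step passes only if at least one injected digest error was counted
  (( $(get_transient_errcount nvme0n1) > 0 ))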
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3911621
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3911621 /var/tmp/bperf.sock
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3911621 ']'
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:00.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:00.057 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:00.315 [2024-05-15 16:05:58.646686] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:28:00.315 [2024-05-15 16:05:58.646740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911621 ]
00:28:00.315 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:00.315 Zero copy mechanism will not be used.
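run_bperf_err above brings up a fresh bdevperf for the 128 KiB randwrite pass and waits for its RPC socket before configuring anything. A condensed sketch of that launch-and-wait sequence, assuming the same binary and socket paths as this run (the real waitforlisten helper in autotest_common.sh adds retries, timeouts, and bookkeeping):

  # start bdevperf idle (-z) on core 1 (core mask -m 2) with a private RPC socket
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # poll until the UNIX domain RPC socket answers
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done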
00:28:00.315 EAL: No free 2048 kB hugepages reported on node 1
00:28:00.315 [2024-05-15 16:05:58.716462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:00.315 [2024-05-15 16:05:58.782157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:00.879 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:28:00.879 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:28:00.879 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:00.879 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:01.136 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:01.136 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:01.136 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:01.136 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:01.136 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:01.136 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:01.394 nvme0n1
00:28:01.394 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:01.394 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:01.394 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:01.394 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:01.394 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:01.394 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:01.653 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:01.653 Zero copy mechanism will not be used.
00:28:01.653 Running I/O for 2 seconds...
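With the new bdevperf listening, the RPCs traced above stage the digest-error scenario in four moves: enable NVMe error counters with unlimited bdev retries, clear any leftover CRC32C injection, attach the target with the TCP data digest enabled, then re-arm injection in corrupt mode and start the workload. The same sequence as plain commands, assuming an SPDK checkout at $SPDK_DIR and the target address from this run:

  # 1. count NVMe error statuses and retry failed I/O indefinitely
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 2. make sure no CRC32C error injection is armed while connecting
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable

  # 3. attach the NVMe-oF TCP target with data digest (DDGST) enabled
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 4. corrupt CRC32C results so digest checks fail, then run the 2-second workload
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests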
00:28:01.653 [2024-05-15 16:06:00.013164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90
00:28:01.653 [2024-05-15 16:06:00.013824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.653 [2024-05-15 16:06:00.013858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[16:06:00.034487 onward: the same three-line record repeats roughly every 20 ms, once per 128 KiB WRITE (len:32, cid:15), with lba and timestamps varying and sqhd cycling 0021/0041/0061/0001]
[2024-05-15 16:06:00.575739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.172 [2024-05-15 16:06:00.595371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.172 [2024-05-15 16:06:00.595960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.172 [2024-05-15 16:06:00.595982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.172 [2024-05-15 16:06:00.616312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.172 [2024-05-15 16:06:00.616791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.172 [2024-05-15 16:06:00.616812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.172 [2024-05-15 16:06:00.637609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.172 [2024-05-15 16:06:00.638182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.172 [2024-05-15 16:06:00.638208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.172 [2024-05-15 16:06:00.655940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.172 [2024-05-15 16:06:00.656471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.172 [2024-05-15 16:06:00.656492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.172 [2024-05-15 16:06:00.675325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.172 [2024-05-15 16:06:00.675695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.172 [2024-05-15 16:06:00.675721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.172 [2024-05-15 16:06:00.696673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.172 [2024-05-15 16:06:00.697242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.172 [2024-05-15 16:06:00.697264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.172 [2024-05-15 16:06:00.718156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.172 [2024-05-15 16:06:00.718935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.172 [2024-05-15 16:06:00.718956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.739199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.739784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.739809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.759870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.760308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.760331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.780441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.780858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.780879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.800058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.800487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.800509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.818563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.819217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.819240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.838549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.839453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.839474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.858409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.858974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.858995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.875136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.875709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.875729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.895921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.896470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.896492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.917315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.917719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.917740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.937656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.938471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.938492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.957979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.958563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.958586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.431 [2024-05-15 16:06:00.979229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.431 [2024-05-15 16:06:00.979732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.431 [2024-05-15 16:06:00.979753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.000225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.001097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.001121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.018016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.018423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.018449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.035217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.035516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.035537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.050423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.050948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.050969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.064433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.064802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.064823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.077426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.077825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.077846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.093337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.094003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.094023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.109123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 
[2024-05-15 16:06:01.109534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.109555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.124251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.124722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.124743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.140814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.141287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.141308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.157766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.158171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.158238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.173641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.174239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.174259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.190685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.191089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.191111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.206074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.206627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.206648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.223503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.224022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.224042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.690 [2024-05-15 16:06:01.241306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.690 [2024-05-15 16:06:01.241778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.690 [2024-05-15 16:06:01.241799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.257930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.258734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.258758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.274731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.275266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.275288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.290611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.291233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.291255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.307536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.307947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.307968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.323505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.323980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.324002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.340306] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.340689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.340710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.356557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.357056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.357077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.373624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.374093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.374115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.387942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.388555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.949 [2024-05-15 16:06:01.405116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.949 [2024-05-15 16:06:01.405693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.949 [2024-05-15 16:06:01.405714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.950 [2024-05-15 16:06:01.423048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.950 [2024-05-15 16:06:01.423598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.950 [2024-05-15 16:06:01.423619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.950 [2024-05-15 16:06:01.438441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.950 [2024-05-15 16:06:01.438840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.950 [2024-05-15 16:06:01.438865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:02.950 [2024-05-15 16:06:01.454254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.950 [2024-05-15 16:06:01.454716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.950 [2024-05-15 16:06:01.454738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.950 [2024-05-15 16:06:01.471143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.950 [2024-05-15 16:06:01.471639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.950 [2024-05-15 16:06:01.471660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.950 [2024-05-15 16:06:01.488115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.950 [2024-05-15 16:06:01.488724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.950 [2024-05-15 16:06:01.488746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.950 [2024-05-15 16:06:01.504900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:02.950 [2024-05-15 16:06:01.505434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.950 [2024-05-15 16:06:01.505455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.520455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.520854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.520879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.536608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.537202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.537224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.554316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.554720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.554741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.572903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.573210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.573230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.590594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.591131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.591153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.608079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.608487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.608508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.625300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.625699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.625721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.641413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.641827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.641848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.657423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.658030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.658053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.675016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.675484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.675505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.692584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.693210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.693233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.709140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.709931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.709953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.726800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.727230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.727252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.742978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.743462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.743484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.209 [2024-05-15 16:06:01.759404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.209 [2024-05-15 16:06:01.760007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.209 [2024-05-15 16:06:01.760028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.775497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.776091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.776116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.791383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.791971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.791994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.807434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.808170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.808198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.825744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.826298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.826319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.842845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.843252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.843274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.858624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.859016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.859037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.874981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.875514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.875542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.891595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.892075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.892097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.908607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.909216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 
[2024-05-15 16:06:01.909237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.923854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.924343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.924364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.939167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.939811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.939832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.956281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.956994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.957015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.974452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.974819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.974839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.469 [2024-05-15 16:06:01.990839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c7dee0) with pdu=0x2000190fef90 00:28:03.469 [2024-05-15 16:06:01.991366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.469 [2024-05-15 16:06:01.991387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.469 00:28:03.469 Latency(us) 00:28:03.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.469 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:03.469 nvme0n1 : 2.01 1693.69 211.71 0.00 0.00 9425.03 2686.98 22649.24 00:28:03.469 =================================================================================================================== 00:28:03.469 Total : 1693.69 211.71 0.00 0.00 9425.03 2686.98 22649.24 00:28:03.469 0 00:28:03.469 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:03.469 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:03.469 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
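The digest-error flood above is the point of the nvmf_digest_error run: the bperf connection carries a CRC32C data digest on every data PDU while the suite injects digest failures, so each affected WRITE completes as a transient transport error instead of landing silently. For orientation, a minimal sketch of attaching an NVMe-oF TCP controller with digests enabled follows; the flag names follow scripts/rpc.py's bdev_nvme_attach_controller options, while the address, port, and subsystem NQN are placeholders rather than this job's actual values.

```bash
# Hedged sketch: create an NVMe bdev over TCP with header and data digest
# enabled, so payload corruption is caught by CRC32C and completed as a
# transport error (the tcp.c data_crc32_calc_done path above) rather than
# being accepted as-is. Address, port, and NQN below are placeholders.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from this log
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hdgst --ddgst
```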
00:28:03.469 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:03.469 | .driver_specific
00:28:03.469 | .nvme_error
00:28:03.469 | .status_code
00:28:03.469 | .command_transient_transport_error'
00:28:03.469 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:03.728 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 109 > 0 ))
00:28:03.728 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3911621
00:28:03.728 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3911621 ']'
00:28:03.728 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3911621
00:28:03.728 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:28:03.728 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:03.728 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3911621
00:28:03.729 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:03.729 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:03.729 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3911621'
killing process with pid 3911621
00:28:03.729 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3911621
Received shutdown signal, test time was about 2.000000 seconds
00:28:03.729
00:28:03.729 Latency(us)
00:28:03.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:03.729 ===================================================================================================================
00:28:03.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:03.729 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3911621
00:28:03.987 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3909517
00:28:03.987 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3909517 ']'
00:28:03.987 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3909517
00:28:03.988 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:28:03.988 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:03.988 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3909517
00:28:03.988 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:28:03.988 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:28:03.988 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3909517'
killing process with pid 3909517
00:28:03.988 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3909517
[2024-05-15 16:06:02.518941] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
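The expansion above shows how the harness turns the injected failures into a pass/fail signal: host/digest.sh's get_transient_errcount reads bdev_get_iostat over the bperf RPC socket and filters out the transient-transport-error counter (109 here) with jq. Reassembled as a standalone helper, using the socket and paths shown in this trace (the surrounding assertion is a sketch of the check at host/digest.sh@71, not a verbatim copy):

```bash
# Reassembled from the xtrace above: count WRITEs that completed with
# "COMMAND TRANSIENT TRANSPORT ERROR" by querying bdevperf's JSON-RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

get_transient_errcount() {
    local bdev=$1
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# The test passes only if at least one transient error was observed,
# mirroring the `(( 109 > 0 ))` expansion in the trace.
errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 )) || exit 1
```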
00:28:03.988 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3909517
00:28:04.246
00:28:04.246 real 0m16.820s
00:28:04.246 user 0m31.960s
00:28:04.246 sys 0m4.646s
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:04.246 ************************************
00:28:04.246 END TEST nvmf_digest_error
00:28:04.246 ************************************
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:04.246 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:04.505 rmmod nvme_tcp
00:28:04.505 rmmod nvme_fabrics
00:28:04.505 rmmod nvme_keyring
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3909517 ']'
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3909517
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3909517 ']'
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3909517
00:28:04.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3909517) - No such process
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3909517 is not found'
00:28:04.505 Process with pid 3909517 is not found
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:04.505 16:06:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:06.408 16:06:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:06.408
00:28:06.408 real 0m43.247s
00:28:06.408 user 1m6.908s
00:28:06.408 sys 0m14.428s
16:06:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
16:06:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
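The teardown traced above (nvmftestfini, nvmfcleanup, module unload, killprocess, namespace removal) condenses to roughly the following sketch. The real helpers live in test/nvmf/common.sh and test/common/autotest_common.sh and do more bookkeeping; the pid variable name and the back-off between unload attempts are assumptions.

```bash
# Hedged sketch of the nvmftestfini sequence in the trace above.
# $nvmfpid is a stand-in for the target pid tracked by the harness.
nvmftestfini_sketch() {
    sync
    set +e                          # module unload is allowed to fail and retry
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                     # assumption: brief pause between attempts
    done
    set -e
    if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"             # target still running: stop it
    else
        echo "Process with pid $nvmfpid is not found"   # matches the log above
    fi
    ip -4 addr flush cvl_0_1        # strip the test addresses from the NIC port
}
```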
00:28:06.408 ************************************
00:28:06.408 END TEST nvmf_digest
00:28:06.408 ************************************
00:28:06.408 16:06:04 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:28:06.408 16:06:04 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:28:06.408 16:06:04 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:28:06.408 16:06:04 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:06.408 16:06:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:28:06.408 16:06:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:28:06.408 16:06:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:06.667 ************************************
00:28:06.667 START TEST nvmf_bdevperf
00:28:06.667 ************************************
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:06.667 * Looking for test storage...
00:28:06.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:06.667 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:06.668 16:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:13.230 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:13.230 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:13.230 Found net devices under 0000:af:00.0: cvl_0_0 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:13.230 Found net devices under 0000:af:00.1: cvl_0_1 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:13.230 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
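For readers following the trace: the nvmf_tcp_init block above splits the two E810 ports into a point-to-point target/initiator pair. cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. Collected from the commands traced above, the topology can be reproduced standalone roughly as follows (run as root; the interface names, addresses, and port are taken from this log, the rest is an editor's sketch):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1      # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                              # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

The ping exchange that follows in the log verifies both directions of this link before any NVMe/TCP traffic is attempted.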
00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:13.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:28:13.489 00:28:13.489 --- 10.0.0.2 ping statistics --- 00:28:13.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.489 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:28:13.489 00:28:13.489 --- 10.0.0.1 ping statistics --- 00:28:13.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.489 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3916582 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3916582 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3916582 ']' 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:13.489 16:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:13.489 [2024-05-15 16:06:11.970681] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
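nvmfappstart then launches the target inside that namespace: NVMF_APP expands to 'ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE' (core mask 0xE = cores 1-3, matching the three reactors reported below), the pid (3916582 here) is kept as $nvmfpid, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough standalone equivalent, with a polling loop standing in for the harness's waitforlisten helper (not the harness code itself):

    cd /path/to/spdk   # assumption: a built SPDK tree
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # crude stand-in for waitforlisten: retry a harmless RPC until the socket is up
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done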
00:28:13.489 [2024-05-15 16:06:11.970727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.489 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.489 [2024-05-15 16:06:12.041963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:13.806 [2024-05-15 16:06:12.118696] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.806 [2024-05-15 16:06:12.118732] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.806 [2024-05-15 16:06:12.118741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.806 [2024-05-15 16:06:12.118750] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.806 [2024-05-15 16:06:12.118774] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.806 [2024-05-15 16:06:12.118819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.806 [2024-05-15 16:06:12.118902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.806 [2024-05-15 16:06:12.118904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.371 [2024-05-15 16:06:12.817949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.371 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.372 Malloc0 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.372 [2024-05-15 16:06:12.890866] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:14.372 [2024-05-15 16:06:12.891112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.372 { 00:28:14.372 "params": { 00:28:14.372 "name": "Nvme$subsystem", 00:28:14.372 "trtype": "$TEST_TRANSPORT", 00:28:14.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.372 "adrfam": "ipv4", 00:28:14.372 "trsvcid": "$NVMF_PORT", 00:28:14.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.372 "hdgst": ${hdgst:-false}, 00:28:14.372 "ddgst": ${ddgst:-false} 00:28:14.372 }, 00:28:14.372 "method": "bdev_nvme_attach_controller" 00:28:14.372 } 00:28:14.372 EOF 00:28:14.372 )") 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:14.372 16:06:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:14.372 "params": { 00:28:14.372 "name": "Nvme1", 00:28:14.372 "trtype": "tcp", 00:28:14.372 "traddr": "10.0.0.2", 00:28:14.372 "adrfam": "ipv4", 00:28:14.372 "trsvcid": "4420", 00:28:14.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:14.372 "hdgst": false, 00:28:14.372 "ddgst": false 00:28:14.372 }, 00:28:14.372 "method": "bdev_nvme_attach_controller" 00:28:14.372 }' 00:28:14.629 [2024-05-15 16:06:12.941688] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
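With the target listening on its RPC socket, tgt_init provisions it through rpc_cmd, a thin wrapper around scripts/rpc.py. The same five steps written as direct rpc.py calls, arguments exactly as traced above (the per-flag comments reflect standard SPDK usage, not anything printed in this log):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u 8192 is the IO unit size, -o is carried in from NVMF_TRANSPORT_OPTS
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                               # -a: allow any host, -s: serial number
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The initiator side needs no RPC step: gen_nvmf_target_json folds the bdev_nvme_attach_controller fragment pretty-printed above into a JSON config, which bdevperf consumes directly via --json /dev/fd/62.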
00:28:14.629 [2024-05-15 16:06:12.941734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916630 ] 00:28:14.629 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.629 [2024-05-15 16:06:13.011135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.629 [2024-05-15 16:06:13.082229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.886 Running I/O for 1 seconds... 00:28:15.818 00:28:15.818 Latency(us) 00:28:15.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.819 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:15.819 Verification LBA range: start 0x0 length 0x4000 00:28:15.819 Nvme1n1 : 1.01 11359.41 44.37 0.00 0.00 11226.65 2477.26 27682.41 00:28:15.819 =================================================================================================================== 00:28:15.819 Total : 11359.41 44.37 0.00 0.00 11226.65 2477.26 27682.41 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3916923 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:16.076 { 00:28:16.076 "params": { 00:28:16.076 "name": "Nvme$subsystem", 00:28:16.076 "trtype": "$TEST_TRANSPORT", 00:28:16.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.076 "adrfam": "ipv4", 00:28:16.076 "trsvcid": "$NVMF_PORT", 00:28:16.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.076 "hdgst": ${hdgst:-false}, 00:28:16.076 "ddgst": ${ddgst:-false} 00:28:16.076 }, 00:28:16.076 "method": "bdev_nvme_attach_controller" 00:28:16.076 } 00:28:16.076 EOF 00:28:16.076 )") 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:16.076 16:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:16.076 "params": { 00:28:16.076 "name": "Nvme1", 00:28:16.076 "trtype": "tcp", 00:28:16.076 "traddr": "10.0.0.2", 00:28:16.076 "adrfam": "ipv4", 00:28:16.076 "trsvcid": "4420", 00:28:16.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:16.076 "hdgst": false, 00:28:16.076 "ddgst": false 00:28:16.076 }, 00:28:16.076 "method": "bdev_nvme_attach_controller" 00:28:16.076 }' 00:28:16.076 [2024-05-15 16:06:14.626983] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
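Reading the one-second verify run above against its header row: runtime 1.01 s, 11359.41 IOPS, 44.37 MiB/s, zero fails and timeouts, and average/min/max latency of 11226.65/2477.26/27682.41 microseconds. The derived columns are self-consistent, as a throwaway check shows (editor's arithmetic, not part of the harness):

    # 4096-byte I/Os: MiB/s = IOPS * 4096 / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 11359.41 * 4096 / 1048576 }'  # -> 44.37 MiB/s
    # Little's law at queue depth 128: average latency ~= QD / IOPS
    awk 'BEGIN { printf "%.1f ms\n", 128 / 11359.41 * 1000 }'         # -> 11.3 ms vs the reported ~11.2 ms

The second bdevperf invocation that follows repeats the workload for 15 seconds with -f, which by context keeps the job running across I/O failures so the fault injected next can be observed.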
00:28:16.076 [2024-05-15 16:06:14.627035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916923 ] 00:28:16.333 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.333 [2024-05-15 16:06:14.697056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.333 [2024-05-15 16:06:14.766456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.606 Running I/O for 15 seconds... 00:28:19.132 16:06:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3916582 00:28:19.132 16:06:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:19.132 [2024-05-15 16:06:17.596469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.132 [2024-05-15 16:06:17.596506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.132 [2024-05-15 16:06:17.596526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.132 [2024-05-15 16:06:17.596538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.132 [2024-05-15 16:06:17.596551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.132 [2024-05-15 16:06:17.596562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.132 [2024-05-15 16:06:17.596575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.132 [2024-05-15 16:06:17.596586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.132 [2024-05-15 16:06:17.596597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.132 [2024-05-15 16:06:17.596608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.132 [2024-05-15 16:06:17.596622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.132 [2024-05-15 16:06:17.596637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.132 [2024-05-15 16:06:17.596649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.132 [2024-05-15 16:06:17.596658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.132 [2024-05-15 16:06:17.596669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.132 [2024-05-15 16:06:17.596678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:19.133
[... roughly a hundred further nvme_qpair notice pairs elided: each remaining in-flight command (WRITE lba 91544-92128, READ lba 91112-91320, len:8, qid:1) is echoed by nvme_io_qpair_print_command and completes with 'ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0' after the target is killed ...]
READ sqid:1 cid:82 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91408 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.598989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.598998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.599009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.599017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.599028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.599037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.599048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.135 [2024-05-15 16:06:17.599057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.599067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890610 is same with the state(5) to be set 00:28:19.135 [2024-05-15 16:06:17.599079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:19.135 [2024-05-15 16:06:17.599086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:19.135 [2024-05-15 16:06:17.599094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91472 len:8 PRP1 0x0 PRP2 0x0 00:28:19.135 [2024-05-15 16:06:17.599107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.135 [2024-05-15 16:06:17.599152] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x890610 was disconnected and 
freed. reset controller. 00:28:19.135 [2024-05-15 16:06:17.601846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.135 [2024-05-15 16:06:17.601895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.135 [2024-05-15 16:06:17.602723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.135 [2024-05-15 16:06:17.603128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.135 [2024-05-15 16:06:17.603141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.136 [2024-05-15 16:06:17.603151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.136 [2024-05-15 16:06:17.603333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.136 [2024-05-15 16:06:17.603507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.136 [2024-05-15 16:06:17.603518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.136 [2024-05-15 16:06:17.603528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.136 [2024-05-15 16:06:17.606229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.136 [2024-05-15 16:06:17.614947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.136 [2024-05-15 16:06:17.615597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.616150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.616206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.136 [2024-05-15 16:06:17.616240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.136 [2024-05-15 16:06:17.616835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.136 [2024-05-15 16:06:17.617286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.136 [2024-05-15 16:06:17.617297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.136 [2024-05-15 16:06:17.617306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.136 [2024-05-15 16:06:17.619859] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
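
[Editor's note] The command/completion pairs above tell one story: every queued READ/WRITE is completed with status (00/08), which SPDK renders as ABORTED - SQ DELETION. The pair is (SCT/SC): Status Code Type 0x0 (generic command status) and Status Code 0x08 (Command Aborted due to SQ Deletion), the status an NVMe controller returns for commands still queued when their submission queue is torn down during a reset. A minimal decoding sketch in C (editor-added illustration; nvme_status_str is a hypothetical helper, not an SPDK API):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the (SCT/SC) pair SPDK prints as "(00/08)". Only the two
     * codes seen in this log are handled; full tables are in the NVMe
     * spec and in SPDK's nvme_qpair.c. */
    static const char *nvme_status_str(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0 && sc == 0x00)
            return "SUCCESS";
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "OTHER";
    }

    int main(void)
    {
        uint8_t sct = 0x00, sc = 0x08;  /* the "(00/08)" from the log */
        printf("(%02x/%02x) -> %s\n", sct, sc, nvme_status_str(sct, sc));
        return 0;
    }
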
00:28:19.136 [2024-05-15 16:06:17.627650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.136 [2024-05-15 16:06:17.628298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.628789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.628829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.136 [2024-05-15 16:06:17.628861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.136 [2024-05-15 16:06:17.629343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.136 [2024-05-15 16:06:17.629511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.136 [2024-05-15 16:06:17.629521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.136 [2024-05-15 16:06:17.629530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.136 [2024-05-15 16:06:17.632081] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.136 [2024-05-15 16:06:17.640418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.136 [2024-05-15 16:06:17.641079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.641594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.641607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.136 [2024-05-15 16:06:17.641617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.136 [2024-05-15 16:06:17.641784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.136 [2024-05-15 16:06:17.641953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.136 [2024-05-15 16:06:17.641964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.136 [2024-05-15 16:06:17.641972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.136 [2024-05-15 16:06:17.644529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.136 [2024-05-15 16:06:17.653225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.136 [2024-05-15 16:06:17.653862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.654402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.654445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.136 [2024-05-15 16:06:17.654477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.136 [2024-05-15 16:06:17.655072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.136 [2024-05-15 16:06:17.655263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.136 [2024-05-15 16:06:17.655274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.136 [2024-05-15 16:06:17.655282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.136 [2024-05-15 16:06:17.659007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.136 [2024-05-15 16:06:17.666820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.136 [2024-05-15 16:06:17.667465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.668013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.668054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.136 [2024-05-15 16:06:17.668087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.136 [2024-05-15 16:06:17.668591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.136 [2024-05-15 16:06:17.668760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.136 [2024-05-15 16:06:17.668770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.136 [2024-05-15 16:06:17.668779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.136 [2024-05-15 16:06:17.671453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.136 [2024-05-15 16:06:17.679529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.136 [2024-05-15 16:06:17.680165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.680721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-05-15 16:06:17.680761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.136 [2024-05-15 16:06:17.680793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.136 [2024-05-15 16:06:17.681267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.136 [2024-05-15 16:06:17.681435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.136 [2024-05-15 16:06:17.681451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.136 [2024-05-15 16:06:17.681460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.136 [2024-05-15 16:06:17.683995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.136 [2024-05-15 16:06:17.692479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.395 [2024-05-15 16:06:17.693145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.693647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.693722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.395 [2024-05-15 16:06:17.693775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.395 [2024-05-15 16:06:17.694242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.395 [2024-05-15 16:06:17.694427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.395 [2024-05-15 16:06:17.694439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.395 [2024-05-15 16:06:17.694449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.395 [2024-05-15 16:06:17.697010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.395 [2024-05-15 16:06:17.705327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.395 [2024-05-15 16:06:17.706029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.706556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.706599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.395 [2024-05-15 16:06:17.706633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.395 [2024-05-15 16:06:17.707243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.395 [2024-05-15 16:06:17.707590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.395 [2024-05-15 16:06:17.707600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.395 [2024-05-15 16:06:17.707609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.395 [2024-05-15 16:06:17.710246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.395 [2024-05-15 16:06:17.718094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.395 [2024-05-15 16:06:17.718765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.719279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.719311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.395 [2024-05-15 16:06:17.719321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.395 [2024-05-15 16:06:17.719481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.395 [2024-05-15 16:06:17.719639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.395 [2024-05-15 16:06:17.719649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.395 [2024-05-15 16:06:17.719661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.395 [2024-05-15 16:06:17.722280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.395 [2024-05-15 16:06:17.730878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.395 [2024-05-15 16:06:17.731456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.731857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.731897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.395 [2024-05-15 16:06:17.731930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.395 [2024-05-15 16:06:17.732387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.395 [2024-05-15 16:06:17.732555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.395 [2024-05-15 16:06:17.732566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.395 [2024-05-15 16:06:17.732574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.395 [2024-05-15 16:06:17.735225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.395 [2024-05-15 16:06:17.743693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.395 [2024-05-15 16:06:17.744363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.744819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.744859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.395 [2024-05-15 16:06:17.744891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.395 [2024-05-15 16:06:17.745413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.395 [2024-05-15 16:06:17.745582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.395 [2024-05-15 16:06:17.745592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.395 [2024-05-15 16:06:17.745601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.395 [2024-05-15 16:06:17.748160] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.395 [2024-05-15 16:06:17.756480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.395 [2024-05-15 16:06:17.757090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.757547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.757589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.395 [2024-05-15 16:06:17.757622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.395 [2024-05-15 16:06:17.758229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.395 [2024-05-15 16:06:17.758652] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.395 [2024-05-15 16:06:17.758663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.395 [2024-05-15 16:06:17.758671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.395 [2024-05-15 16:06:17.761252] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.395 [2024-05-15 16:06:17.769329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.395 [2024-05-15 16:06:17.769954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.770413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.395 [2024-05-15 16:06:17.770425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.395 [2024-05-15 16:06:17.770434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.395 [2024-05-15 16:06:17.770603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.395 [2024-05-15 16:06:17.770774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.395 [2024-05-15 16:06:17.770784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.395 [2024-05-15 16:06:17.770793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.395 [2024-05-15 16:06:17.773353] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.395 [2024-05-15 16:06:17.782383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.783026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.783547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.783589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.783620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.783792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.783964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.783975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.783984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.786692] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.396 [2024-05-15 16:06:17.795160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.795851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.796347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.796390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.796423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.796670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.796838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.796848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.796857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.800503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.396 [2024-05-15 16:06:17.808653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.809326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.809835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.809875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.809907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.810503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.810672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.810682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.810691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.813263] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.396 [2024-05-15 16:06:17.821386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.821918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.822414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.822457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.822488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.822697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.822865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.822876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.822885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.825447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.396 [2024-05-15 16:06:17.834186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.834905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.835449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.835490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.835522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.836117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.836683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.836694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.836703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.839268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.396 [2024-05-15 16:06:17.846994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.847641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.848068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.848081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.848090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.848271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.848447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.848461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.848472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.851178] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.396 [2024-05-15 16:06:17.859969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.860632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.861212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.861264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.861304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.861914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.862222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.862239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.862251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.864946] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.396 [2024-05-15 16:06:17.872965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.873543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.873976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.873989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.873999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.874171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.874348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.874359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.874368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.877070] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.396 [2024-05-15 16:06:17.885974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.886622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.887072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.887121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.887155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.887709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.887957] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.887973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.396 [2024-05-15 16:06:17.887985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.396 [2024-05-15 16:06:17.891785] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.396 [2024-05-15 16:06:17.899343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.396 [2024-05-15 16:06:17.899931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.900375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.396 [2024-05-15 16:06:17.900419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.396 [2024-05-15 16:06:17.900452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.396 [2024-05-15 16:06:17.901051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.396 [2024-05-15 16:06:17.901256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.396 [2024-05-15 16:06:17.901266] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.397 [2024-05-15 16:06:17.901276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.397 [2024-05-15 16:06:17.903988] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.397 [2024-05-15 16:06:17.912173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.397 [2024-05-15 16:06:17.912818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.397 [2024-05-15 16:06:17.913329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.397 [2024-05-15 16:06:17.913374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.397 [2024-05-15 16:06:17.913407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.397 [2024-05-15 16:06:17.913932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.397 [2024-05-15 16:06:17.914100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.397 [2024-05-15 16:06:17.914111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.397 [2024-05-15 16:06:17.914120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.397 [2024-05-15 16:06:17.916731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.397 [2024-05-15 16:06:17.925041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.397 [2024-05-15 16:06:17.925627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.397 [2024-05-15 16:06:17.926022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.397 [2024-05-15 16:06:17.926062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.397 [2024-05-15 16:06:17.926102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.397 [2024-05-15 16:06:17.926712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.397 [2024-05-15 16:06:17.927061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.397 [2024-05-15 16:06:17.927072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.397 [2024-05-15 16:06:17.927080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.397 [2024-05-15 16:06:17.929651] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.397 [2024-05-15 16:06:17.937845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.397 [2024-05-15 16:06:17.938426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.397 [2024-05-15 16:06:17.938822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.397 [2024-05-15 16:06:17.938868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.397 [2024-05-15 16:06:17.938878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.397 [2024-05-15 16:06:17.939045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.397 [2024-05-15 16:06:17.939218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.397 [2024-05-15 16:06:17.939229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.397 [2024-05-15 16:06:17.939238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.397 [2024-05-15 16:06:17.941800] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.397 [2024-05-15 16:06:17.950633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.397 [2024-05-15 16:06:17.951324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.397 [2024-05-15 16:06:17.951782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.397 [2024-05-15 16:06:17.951821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.397 [2024-05-15 16:06:17.951854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.397 [2024-05-15 16:06:17.952118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.397 [2024-05-15 16:06:17.952302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.397 [2024-05-15 16:06:17.952314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.397 [2024-05-15 16:06:17.952323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.397 [2024-05-15 16:06:17.955063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.656 [2024-05-15 16:06:17.963659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.656 [2024-05-15 16:06:17.964271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:17.964665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:17.964679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.656 [2024-05-15 16:06:17.964689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.656 [2024-05-15 16:06:17.964861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.656 [2024-05-15 16:06:17.965029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.656 [2024-05-15 16:06:17.965040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.656 [2024-05-15 16:06:17.965049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.656 [2024-05-15 16:06:17.967697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.656 [2024-05-15 16:06:17.976567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.656 [2024-05-15 16:06:17.977180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:17.977643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:17.977683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.656 [2024-05-15 16:06:17.977716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.656 [2024-05-15 16:06:17.978143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.656 [2024-05-15 16:06:17.978317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.656 [2024-05-15 16:06:17.978328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.656 [2024-05-15 16:06:17.978337] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.656 [2024-05-15 16:06:17.981954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.656 [2024-05-15 16:06:17.990027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.656 [2024-05-15 16:06:17.990638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:17.991077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:17.991118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.656 [2024-05-15 16:06:17.991150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.656 [2024-05-15 16:06:17.991782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.656 [2024-05-15 16:06:17.992159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.656 [2024-05-15 16:06:17.992170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.656 [2024-05-15 16:06:17.992179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.656 [2024-05-15 16:06:17.994784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.656 [2024-05-15 16:06:18.002848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.656 [2024-05-15 16:06:18.003441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.003921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.003961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.656 [2024-05-15 16:06:18.004002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.656 [2024-05-15 16:06:18.004169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.656 [2024-05-15 16:06:18.004345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.656 [2024-05-15 16:06:18.004357] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.656 [2024-05-15 16:06:18.004366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.656 [2024-05-15 16:06:18.006981] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.656 [2024-05-15 16:06:18.015624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.656 [2024-05-15 16:06:18.016287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.016714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.016754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.656 [2024-05-15 16:06:18.016787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.656 [2024-05-15 16:06:18.017269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.656 [2024-05-15 16:06:18.017437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.656 [2024-05-15 16:06:18.017447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.656 [2024-05-15 16:06:18.017456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.656 [2024-05-15 16:06:18.020075] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.656 [2024-05-15 16:06:18.028426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.656 [2024-05-15 16:06:18.029120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.029663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.029704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.656 [2024-05-15 16:06:18.029736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.656 [2024-05-15 16:06:18.030213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.656 [2024-05-15 16:06:18.030381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.656 [2024-05-15 16:06:18.030391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.656 [2024-05-15 16:06:18.030400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.656 [2024-05-15 16:06:18.032956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.656 [2024-05-15 16:06:18.041272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.656 [2024-05-15 16:06:18.041951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.042463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.042478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.656 [2024-05-15 16:06:18.042487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.656 [2024-05-15 16:06:18.042655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.656 [2024-05-15 16:06:18.042822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.656 [2024-05-15 16:06:18.042836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.656 [2024-05-15 16:06:18.042845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.656 [2024-05-15 16:06:18.045408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.656 [2024-05-15 16:06:18.054135] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.656 [2024-05-15 16:06:18.054757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.055257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-05-15 16:06:18.055299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.055330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.055926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.056344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.056355] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.056364] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.058998] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.657 [2024-05-15 16:06:18.066990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.067558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.068061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.068102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.068134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.068741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.069352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.069385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.069397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.073173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.657 [2024-05-15 16:06:18.080596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.081232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.081577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.081589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.081598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.081765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.081932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.081943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.081956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.084524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.657 [2024-05-15 16:06:18.093398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.094550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.094960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.095006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.095040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.095531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.095700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.095711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.095720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.098361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.657 [2024-05-15 16:06:18.106340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.106977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.107438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.107451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.107461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.107634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.107805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.107815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.107824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.110520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.657 [2024-05-15 16:06:18.119314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.120014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.120426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.120440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.120450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.120622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.120793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.120804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.120813] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.123535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.657 [2024-05-15 16:06:18.132288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.132971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.133490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.133505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.133514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.133688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.133861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.133871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.133880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.136578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.657 [2024-05-15 16:06:18.145160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.145679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.146265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.146308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.146340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.146934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.147553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.147564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.147573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.150245] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.657 [2024-05-15 16:06:18.157985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.158605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.159104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.159144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.159176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.159685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.159853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.159863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.159872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.162520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.657 [2024-05-15 16:06:18.170874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.657 [2024-05-15 16:06:18.171437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.171859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-05-15 16:06:18.171899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.657 [2024-05-15 16:06:18.171931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.657 [2024-05-15 16:06:18.172453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.657 [2024-05-15 16:06:18.172621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.657 [2024-05-15 16:06:18.172631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.657 [2024-05-15 16:06:18.172640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.657 [2024-05-15 16:06:18.175266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.658 [2024-05-15 16:06:18.183720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.658 [2024-05-15 16:06:18.184316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.658 [2024-05-15 16:06:18.184664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.658 [2024-05-15 16:06:18.184704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.658 [2024-05-15 16:06:18.184736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.658 [2024-05-15 16:06:18.185342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.658 [2024-05-15 16:06:18.185748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.658 [2024-05-15 16:06:18.185758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.658 [2024-05-15 16:06:18.185767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.658 [2024-05-15 16:06:18.188387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.658 [2024-05-15 16:06:18.196602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.658 [2024-05-15 16:06:18.197202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.658 [2024-05-15 16:06:18.197695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.658 [2024-05-15 16:06:18.197736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.658 [2024-05-15 16:06:18.197768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.658 [2024-05-15 16:06:18.198039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.658 [2024-05-15 16:06:18.198213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.658 [2024-05-15 16:06:18.198224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.658 [2024-05-15 16:06:18.198233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.658 [2024-05-15 16:06:18.200878] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.658 [2024-05-15 16:06:18.209466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.658 [2024-05-15 16:06:18.210041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.658 [2024-05-15 16:06:18.210479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.658 [2024-05-15 16:06:18.210492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.658 [2024-05-15 16:06:18.210501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.658 [2024-05-15 16:06:18.210668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.658 [2024-05-15 16:06:18.210835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.658 [2024-05-15 16:06:18.210846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.658 [2024-05-15 16:06:18.210854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.658 [2024-05-15 16:06:18.213438] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.917 [2024-05-15 16:06:18.222284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.917 [2024-05-15 16:06:18.222873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.223260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.223282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.917 [2024-05-15 16:06:18.223293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.917 [2024-05-15 16:06:18.223476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.917 [2024-05-15 16:06:18.223653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.917 [2024-05-15 16:06:18.223664] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.917 [2024-05-15 16:06:18.223673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.917 [2024-05-15 16:06:18.226281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.917 [2024-05-15 16:06:18.235015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.917 [2024-05-15 16:06:18.235556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.236003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.236044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.917 [2024-05-15 16:06:18.236076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.917 [2024-05-15 16:06:18.236548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.917 [2024-05-15 16:06:18.236716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.917 [2024-05-15 16:06:18.236727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.917 [2024-05-15 16:06:18.236736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.917 [2024-05-15 16:06:18.239308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.917 [2024-05-15 16:06:18.247766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.917 [2024-05-15 16:06:18.248363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.248756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.248797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.917 [2024-05-15 16:06:18.248837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.917 [2024-05-15 16:06:18.249456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.917 [2024-05-15 16:06:18.249907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.917 [2024-05-15 16:06:18.249920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.917 [2024-05-15 16:06:18.249931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.917 [2024-05-15 16:06:18.252504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.917 [2024-05-15 16:06:18.260520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.917 [2024-05-15 16:06:18.261149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.261550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.261592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.917 [2024-05-15 16:06:18.261624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.917 [2024-05-15 16:06:18.261820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.917 [2024-05-15 16:06:18.261987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.917 [2024-05-15 16:06:18.261997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.917 [2024-05-15 16:06:18.262006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.917 [2024-05-15 16:06:18.265728] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.917 [2024-05-15 16:06:18.273845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.917 [2024-05-15 16:06:18.274482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.274876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.274928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.917 [2024-05-15 16:06:18.274938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.917 [2024-05-15 16:06:18.275105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.917 [2024-05-15 16:06:18.275277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.917 [2024-05-15 16:06:18.275288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.917 [2024-05-15 16:06:18.275297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.917 [2024-05-15 16:06:18.277865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.917 [2024-05-15 16:06:18.286628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.917 [2024-05-15 16:06:18.287261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.287787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.287828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.917 [2024-05-15 16:06:18.287861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.917 [2024-05-15 16:06:18.288483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.917 [2024-05-15 16:06:18.288947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.917 [2024-05-15 16:06:18.288958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.917 [2024-05-15 16:06:18.288967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.917 [2024-05-15 16:06:18.291622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.917 [2024-05-15 16:06:18.299584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.917 [2024-05-15 16:06:18.300220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.300596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.300609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.917 [2024-05-15 16:06:18.300618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.917 [2024-05-15 16:06:18.300791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.917 [2024-05-15 16:06:18.300964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.917 [2024-05-15 16:06:18.300975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.917 [2024-05-15 16:06:18.300984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.917 [2024-05-15 16:06:18.303688] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.917 [2024-05-15 16:06:18.312642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.917 [2024-05-15 16:06:18.313259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.917 [2024-05-15 16:06:18.313681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.313693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.313703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.313874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.314045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.314055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.314064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.316761] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.918 [2024-05-15 16:06:18.325672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.326170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.326572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.326613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.326645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.327051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.327236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.327248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.327257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.329944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.918 [2024-05-15 16:06:18.338725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.339306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.339772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.339812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.339844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.340266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.340439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.340450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.340459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.343148] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.918 [2024-05-15 16:06:18.351599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.352274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.352689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.352701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.352711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.352878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.353045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.353055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.353064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.355762] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.918 [2024-05-15 16:06:18.364632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.365295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.365774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.365817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.365850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.366131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.366307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.366322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.366331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.369052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.918 [2024-05-15 16:06:18.377659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.378331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.378858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.378899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.378932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.379142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.379334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.379348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.379358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.382075] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.918 [2024-05-15 16:06:18.390688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.391362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.391883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.391923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.391955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.392325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.392498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.392509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.392518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.395215] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.918 [2024-05-15 16:06:18.403513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.404213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.404713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.404753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.404785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.405253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.405421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.405431] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.405443] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.407998] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.918 [2024-05-15 16:06:18.416193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.416843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.417285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.417327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.417359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.417955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.418211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.418221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.418230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.420724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.918 [2024-05-15 16:06:18.428958] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.918 [2024-05-15 16:06:18.429607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.430053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.918 [2024-05-15 16:06:18.430065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.918 [2024-05-15 16:06:18.430074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.918 [2024-05-15 16:06:18.430247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.918 [2024-05-15 16:06:18.430414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.918 [2024-05-15 16:06:18.430424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.918 [2024-05-15 16:06:18.430433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.918 [2024-05-15 16:06:18.432985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.919 [2024-05-15 16:06:18.441622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.919 [2024-05-15 16:06:18.442279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.919 [2024-05-15 16:06:18.442544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.919 [2024-05-15 16:06:18.442583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.919 [2024-05-15 16:06:18.442615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.919 [2024-05-15 16:06:18.443227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.919 [2024-05-15 16:06:18.443454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.919 [2024-05-15 16:06:18.443465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.919 [2024-05-15 16:06:18.443473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.919 [2024-05-15 16:06:18.446089] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.919 [2024-05-15 16:06:18.454285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.919 [2024-05-15 16:06:18.454939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.919 [2024-05-15 16:06:18.455391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.919 [2024-05-15 16:06:18.455405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.919 [2024-05-15 16:06:18.455414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.919 [2024-05-15 16:06:18.455582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.919 [2024-05-15 16:06:18.455749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.919 [2024-05-15 16:06:18.455759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.919 [2024-05-15 16:06:18.455768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.919 [2024-05-15 16:06:18.458287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.919 [2024-05-15 16:06:18.466955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.919 [2024-05-15 16:06:18.467605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.919 [2024-05-15 16:06:18.468121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.919 [2024-05-15 16:06:18.468161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:19.919 [2024-05-15 16:06:18.468208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:19.919 [2024-05-15 16:06:18.468803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:19.919 [2024-05-15 16:06:18.469351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.919 [2024-05-15 16:06:18.469362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.919 [2024-05-15 16:06:18.469371] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.919 [2024-05-15 16:06:18.471928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.177 [2024-05-15 16:06:18.479976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.177 [2024-05-15 16:06:18.480648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.480857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.480899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.177 [2024-05-15 16:06:18.480932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.177 [2024-05-15 16:06:18.481451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.177 [2024-05-15 16:06:18.481620] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.177 [2024-05-15 16:06:18.481630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.177 [2024-05-15 16:06:18.481639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.177 [2024-05-15 16:06:18.484346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.177 [2024-05-15 16:06:18.492710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.177 [2024-05-15 16:06:18.493135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.493694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.493737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.177 [2024-05-15 16:06:18.493770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.177 [2024-05-15 16:06:18.494381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.177 [2024-05-15 16:06:18.494571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.177 [2024-05-15 16:06:18.494581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.177 [2024-05-15 16:06:18.494590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.177 [2024-05-15 16:06:18.498133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.177 [2024-05-15 16:06:18.506147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.177 [2024-05-15 16:06:18.506801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.507321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.507362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.177 [2024-05-15 16:06:18.507395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.177 [2024-05-15 16:06:18.507990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.177 [2024-05-15 16:06:18.508270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.177 [2024-05-15 16:06:18.508280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.177 [2024-05-15 16:06:18.508289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.177 [2024-05-15 16:06:18.510782] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.177 [2024-05-15 16:06:18.518892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.177 [2024-05-15 16:06:18.519534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.519986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.519998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.177 [2024-05-15 16:06:18.520006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.177 [2024-05-15 16:06:18.520164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.177 [2024-05-15 16:06:18.520348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.177 [2024-05-15 16:06:18.520359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.177 [2024-05-15 16:06:18.520368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.177 [2024-05-15 16:06:18.522918] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.177 [2024-05-15 16:06:18.531577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.177 [2024-05-15 16:06:18.532215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.532649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.177 [2024-05-15 16:06:18.532688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.177 [2024-05-15 16:06:18.532720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.533246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.533414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.533424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.533433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.535985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.178 [2024-05-15 16:06:18.544318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.544865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.545069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.545080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.545089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.545270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.545437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.545447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.545456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.548007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.178 [2024-05-15 16:06:18.557010] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.557656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.558118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.558157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.558189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.558413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.558581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.558591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.558599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.561151] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.178 [2024-05-15 16:06:18.569838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.570500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.570943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.570991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.571023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.571469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.571641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.571651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.571660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.574347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.178 [2024-05-15 16:06:18.582564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.583209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.583735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.583775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.583807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.584415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.584901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.584912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.584921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.587476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.178 [2024-05-15 16:06:18.595275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.595927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.596446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.596487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.596519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.597114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.597672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.597684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.597692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.600312] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.178 [2024-05-15 16:06:18.608141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.608842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.609226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.609238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.609251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.609422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.609594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.609605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.609614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.612310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.178 [2024-05-15 16:06:18.621049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.621721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.622244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.622286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.622318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.622832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.623005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.623016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.623025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.625722] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.178 [2024-05-15 16:06:18.634041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.634702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.635165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.635216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.635249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.635679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.635850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.635861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.635870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.638568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.178 [2024-05-15 16:06:18.647017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.647604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.647984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.648023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.648055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.648677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.649009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.649020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.649029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.651696] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.178 [2024-05-15 16:06:18.659859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.660538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.661057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.661097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.661129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.661628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.661796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.661806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.661815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.664369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.178 [2024-05-15 16:06:18.672564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.673223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.673723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.673763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.673795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.674103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.674285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.674296] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.674305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.676860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.178 [2024-05-15 16:06:18.685363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.686003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.686393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.686436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.686468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.687002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.687253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.687268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.687281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.691045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.178 [2024-05-15 16:06:18.698656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.699334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.699851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.699891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.699923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.700535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.701023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.701033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.701042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.703602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.178 [2024-05-15 16:06:18.711412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.712054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.712450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.712494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.712527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.713007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.713168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.713178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.713187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.715801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.178 [2024-05-15 16:06:18.724254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.724789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.725214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.725255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.725287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.725880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.178 [2024-05-15 16:06:18.726233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.178 [2024-05-15 16:06:18.726247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.178 [2024-05-15 16:06:18.726256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.178 [2024-05-15 16:06:18.728806] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.178 [2024-05-15 16:06:18.737098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.178 [2024-05-15 16:06:18.737791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.738338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.178 [2024-05-15 16:06:18.738383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.178 [2024-05-15 16:06:18.738432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.178 [2024-05-15 16:06:18.738899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.437 [2024-05-15 16:06:18.739093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.437 [2024-05-15 16:06:18.739109] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.437 [2024-05-15 16:06:18.739121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.437 [2024-05-15 16:06:18.741786] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.437 [2024-05-15 16:06:18.749997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.437 [2024-05-15 16:06:18.750693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.437 [2024-05-15 16:06:18.751218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.437 [2024-05-15 16:06:18.751261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.437 [2024-05-15 16:06:18.751294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.437 [2024-05-15 16:06:18.751813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.437 [2024-05-15 16:06:18.751981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.437 [2024-05-15 16:06:18.751992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.437 [2024-05-15 16:06:18.752001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.437 [2024-05-15 16:06:18.754589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.437 [2024-05-15 16:06:18.762805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.437 [2024-05-15 16:06:18.763452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.437 [2024-05-15 16:06:18.763865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.437 [2024-05-15 16:06:18.763906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.437 [2024-05-15 16:06:18.763938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.437 [2024-05-15 16:06:18.764546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.437 [2024-05-15 16:06:18.765090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.437 [2024-05-15 16:06:18.765100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.437 [2024-05-15 16:06:18.765112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.437 [2024-05-15 16:06:18.767730] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.437 [2024-05-15 16:06:18.775648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.437 [2024-05-15 16:06:18.776325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.437 [2024-05-15 16:06:18.776846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.437 [2024-05-15 16:06:18.776886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.437 [2024-05-15 16:06:18.776919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.437 [2024-05-15 16:06:18.777118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.437 [2024-05-15 16:06:18.777291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.437 [2024-05-15 16:06:18.777302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.437 [2024-05-15 16:06:18.777310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.437 [2024-05-15 16:06:18.779864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.437 [2024-05-15 16:06:18.788314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.437 [2024-05-15 16:06:18.788972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.437 [2024-05-15 16:06:18.789475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.437 [2024-05-15 16:06:18.789518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.437 [2024-05-15 16:06:18.789551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.437 [2024-05-15 16:06:18.790087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.437 [2024-05-15 16:06:18.790259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.437 [2024-05-15 16:06:18.790270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.790279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.792774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.438 [2024-05-15 16:06:18.801199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.801876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.802392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.802434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.802466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.803034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.803205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.803216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.803225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.805723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.438 [2024-05-15 16:06:18.813994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.814552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.814945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.814986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.815020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.815360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.815528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.815538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.815546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.818155] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.438 [2024-05-15 16:06:18.826710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.827365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.827884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.827923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.827957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.828114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.828335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.828350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.828362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.832132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.438 [2024-05-15 16:06:18.839943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.840501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.840967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.841008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.841040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.841341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.841508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.841519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.841527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.844082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.438 [2024-05-15 16:06:18.852707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.853349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.853869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.853910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.853942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.854510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.854678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.854688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.854697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.857252] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.438 [2024-05-15 16:06:18.865421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.866088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.866479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.866492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.866502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.866674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.866845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.866856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.866864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.869574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.438 [2024-05-15 16:06:18.878341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.879034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.879511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.879556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.879588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.880189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.880596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.880607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.880616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.883322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.438 [2024-05-15 16:06:18.891300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.891863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.892259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.892273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.892283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.892455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.892627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.892638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.892647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.895381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.438 [2024-05-15 16:06:18.904247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.904906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.905395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.905409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.905418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.905586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.905753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.905763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.905772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.908445] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.438 [2024-05-15 16:06:18.916950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.917614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.918133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.918174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.918222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.918659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.918826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.918837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.918845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.921405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.438 [2024-05-15 16:06:18.929803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.930448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.930831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.930842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.930855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.931021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.931188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.931204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.931213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.933813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.438 [2024-05-15 16:06:18.942477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.943129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.943661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.943702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.943735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.944344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.944765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.944775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.944785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.947364] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.438 [2024-05-15 16:06:18.955220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.955793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.956317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.956359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.956391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.956987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.957599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.957636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.957645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.960118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.438 [2024-05-15 16:06:18.967981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.968608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.969069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.969108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.969141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.969595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.969835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.969849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.969862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.973629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.438 [2024-05-15 16:06:18.981406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.981925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.982445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.982486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.982518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.983112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.983638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.983649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.983658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.986216] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.438 [2024-05-15 16:06:18.994193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.438 [2024-05-15 16:06:18.994830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.995385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.438 [2024-05-15 16:06:18.995457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.438 [2024-05-15 16:06:18.995505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.438 [2024-05-15 16:06:18.995839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.438 [2024-05-15 16:06:18.996015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.438 [2024-05-15 16:06:18.996027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.438 [2024-05-15 16:06:18.996036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.438 [2024-05-15 16:06:18.998740] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.698 [2024-05-15 16:06:19.006976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.698 [2024-05-15 16:06:19.007667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.698 [2024-05-15 16:06:19.007956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.698 [2024-05-15 16:06:19.007997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:20.698 [2024-05-15 16:06:19.008031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:20.698 [2024-05-15 16:06:19.008618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:20.698 [2024-05-15 16:06:19.008787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.698 [2024-05-15 16:06:19.008798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.698 [2024-05-15 16:06:19.008807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.698 [2024-05-15 16:06:19.011391] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.698 - 00:28:21.220 [2024-05-15 16:06:19.019805 - 16:06:19.603025] [... the same reset cycle repeats 46 more times, roughly 13 ms apart, for nqn.2016-06.io.spdk:cnode1 (tqpair=0x65e9f0, addr=10.0.0.2, port=4420); every attempt fails with connect() errno = 111 and ends with bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. ...]
00:28:21.220 [2024-05-15 16:06:19.611549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.220 [2024-05-15 16:06:19.612124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.220 [2024-05-15 16:06:19.612590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.220 [2024-05-15 16:06:19.612639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.220 [2024-05-15 16:06:19.612672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.220 [2024-05-15 16:06:19.613114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.220 [2024-05-15 16:06:19.613289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.220 [2024-05-15 16:06:19.613300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.220 [2024-05-15 16:06:19.613309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.220 [2024-05-15 16:06:19.615869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.220 [2024-05-15 16:06:19.624433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.220 [2024-05-15 16:06:19.624969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.625421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.625435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.625444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.625603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.625762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.625772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.221 [2024-05-15 16:06:19.625781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.221 [2024-05-15 16:06:19.628342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.221 [2024-05-15 16:06:19.637252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.221 [2024-05-15 16:06:19.637935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.638434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.638475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.638508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.638995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.639163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.639174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.221 [2024-05-15 16:06:19.639182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.221 [2024-05-15 16:06:19.641879] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.221 [2024-05-15 16:06:19.650133] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.221 [2024-05-15 16:06:19.650809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.651259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.651282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.651296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.651472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.651650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.651662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.221 [2024-05-15 16:06:19.651671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.221 [2024-05-15 16:06:19.654410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.221 [2024-05-15 16:06:19.663184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.221 [2024-05-15 16:06:19.663843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.664257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.664275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.664285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.664462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.664638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.664653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.221 [2024-05-15 16:06:19.664664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.221 [2024-05-15 16:06:19.667345] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.221 [2024-05-15 16:06:19.676137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.221 [2024-05-15 16:06:19.676728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.677219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.677261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.677294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.677571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.677739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.677749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.221 [2024-05-15 16:06:19.677758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.221 [2024-05-15 16:06:19.680385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.221 [2024-05-15 16:06:19.688865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.221 [2024-05-15 16:06:19.689548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.690002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.690047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.690056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.690231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.690399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.690409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.221 [2024-05-15 16:06:19.690418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.221 [2024-05-15 16:06:19.692980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.221 [2024-05-15 16:06:19.701617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.221 [2024-05-15 16:06:19.702233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.702689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.702730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.702762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.703245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.703414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.703424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.221 [2024-05-15 16:06:19.703433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.221 [2024-05-15 16:06:19.705996] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.221 [2024-05-15 16:06:19.714331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.221 [2024-05-15 16:06:19.714978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.715422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.715464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.715496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.716092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.716491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.716502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.221 [2024-05-15 16:06:19.716511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.221 [2024-05-15 16:06:19.719234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.221 [2024-05-15 16:06:19.727110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.221 [2024-05-15 16:06:19.727701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.728235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.221 [2024-05-15 16:06:19.728277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.221 [2024-05-15 16:06:19.728309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.221 [2024-05-15 16:06:19.728904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.221 [2024-05-15 16:06:19.729292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.221 [2024-05-15 16:06:19.729307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.222 [2024-05-15 16:06:19.729320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.222 [2024-05-15 16:06:19.733111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.222 [2024-05-15 16:06:19.740744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.222 [2024-05-15 16:06:19.741430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.222 [2024-05-15 16:06:19.741896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.222 [2024-05-15 16:06:19.741937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.222 [2024-05-15 16:06:19.741970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.222 [2024-05-15 16:06:19.742468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.222 [2024-05-15 16:06:19.742636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.222 [2024-05-15 16:06:19.742647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.222 [2024-05-15 16:06:19.742656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.222 [2024-05-15 16:06:19.745314] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.222 [2024-05-15 16:06:19.753494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.222 [2024-05-15 16:06:19.754030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.222 [2024-05-15 16:06:19.754505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.222 [2024-05-15 16:06:19.754548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.222 [2024-05-15 16:06:19.754580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.222 [2024-05-15 16:06:19.755177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.222 [2024-05-15 16:06:19.755612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.222 [2024-05-15 16:06:19.755623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.222 [2024-05-15 16:06:19.755632] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.222 [2024-05-15 16:06:19.758205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.222 [2024-05-15 16:06:19.766232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.222 [2024-05-15 16:06:19.766884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.222 [2024-05-15 16:06:19.767421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.222 [2024-05-15 16:06:19.767434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.222 [2024-05-15 16:06:19.767443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.222 [2024-05-15 16:06:19.767611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.222 [2024-05-15 16:06:19.767779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.222 [2024-05-15 16:06:19.767789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.222 [2024-05-15 16:06:19.767801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.222 [2024-05-15 16:06:19.770374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.222 [2024-05-15 16:06:19.779149] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.222 [2024-05-15 16:06:19.779760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.222 [2024-05-15 16:06:19.780237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.222 [2024-05-15 16:06:19.780258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.222 [2024-05-15 16:06:19.780270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.222 [2024-05-15 16:06:19.780448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.222 [2024-05-15 16:06:19.780618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.222 [2024-05-15 16:06:19.780631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.222 [2024-05-15 16:06:19.780642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.481 [2024-05-15 16:06:19.783348] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.481 [2024-05-15 16:06:19.791916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.481 [2024-05-15 16:06:19.792540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.481 [2024-05-15 16:06:19.792941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.481 [2024-05-15 16:06:19.792982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.481 [2024-05-15 16:06:19.793016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.481 [2024-05-15 16:06:19.793403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.481 [2024-05-15 16:06:19.793572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.481 [2024-05-15 16:06:19.793582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.481 [2024-05-15 16:06:19.793591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.481 [2024-05-15 16:06:19.796158] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.481 [2024-05-15 16:06:19.804721] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.805386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.805853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.805893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.805926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.806343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.806511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.806522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.806531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.809082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.482 [2024-05-15 16:06:19.817548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.818211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.818628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.818669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.818679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.818846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.819013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.819023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.819032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.821599] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.482 [2024-05-15 16:06:19.830274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.830933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.831449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.831490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.831524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.832120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.832640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.832651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.832660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.835227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.482 [2024-05-15 16:06:19.842961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.843664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.844237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.844280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.844312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.844624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.844791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.844802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.844810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.847463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.482 [2024-05-15 16:06:19.855726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.856423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.856933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.856972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.857005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.857616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.857849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.857859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.857868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.860434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.482 [2024-05-15 16:06:19.868458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.869127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.869611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.869652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.869685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.870005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.870173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.870184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.870200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.872810] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.482 [2024-05-15 16:06:19.881204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.881817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.882265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.882308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.882342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.882559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.882718] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.882728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.882736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.885287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.482 [2024-05-15 16:06:19.893928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.894623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.895011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.895024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.895034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.895210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.895381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.895392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.895401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.898079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.482 [2024-05-15 16:06:19.906882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.907523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.907904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.907917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.907927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.482 [2024-05-15 16:06:19.908099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.482 [2024-05-15 16:06:19.908280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.482 [2024-05-15 16:06:19.908292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.482 [2024-05-15 16:06:19.908301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.482 [2024-05-15 16:06:19.911008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.482 [2024-05-15 16:06:19.919828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.482 [2024-05-15 16:06:19.920511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.920937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.482 [2024-05-15 16:06:19.920997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.482 [2024-05-15 16:06:19.921031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:19.921564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:19.921804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:19.921819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:19.921831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:19.925614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.483 [2024-05-15 16:06:19.933294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:19.933801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.934235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.934299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:19.934333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:19.934836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:19.935005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:19.935016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:19.935025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:19.937579] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.483 [2024-05-15 16:06:19.946055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:19.946599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.947045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.947085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:19.947119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:19.947292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:19.947461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:19.947472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:19.947480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:19.950044] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.483 [2024-05-15 16:06:19.958788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:19.959442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.959973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.960015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:19.960048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:19.960663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:19.961001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:19.961012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:19.961021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:19.963584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.483 [2024-05-15 16:06:19.971618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:19.972258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.972804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.972844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:19.972885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:19.973489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:19.974072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:19.974083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:19.974091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:19.976650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.483 [2024-05-15 16:06:19.984418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:19.984985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.985509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.985552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:19.985584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:19.986065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:19.986236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:19.986247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:19.986256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:19.988809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.483 [2024-05-15 16:06:19.997146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:19.997759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.998279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:19.998321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:19.998355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:19.998950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:19.999334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:19.999345] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:19.999354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:20.002023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.483 [2024-05-15 16:06:20.010148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:20.010776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:20.011201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:20.011215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:20.011225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:20.011411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:20.011593] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:20.011605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:20.011614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:20.014489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.483 [2024-05-15 16:06:20.023093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:20.023792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:20.024246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:20.024289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:20.024322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:20.024902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:20.025071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:20.025082] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.483 [2024-05-15 16:06:20.025090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.483 [2024-05-15 16:06:20.027805] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.483 [2024-05-15 16:06:20.035924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.483 [2024-05-15 16:06:20.036522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:20.036730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.483 [2024-05-15 16:06:20.036769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.483 [2024-05-15 16:06:20.036801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.483 [2024-05-15 16:06:20.037339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.483 [2024-05-15 16:06:20.037507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.483 [2024-05-15 16:06:20.037518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.484 [2024-05-15 16:06:20.037526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.484 [2024-05-15 16:06:20.040241] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.743 [2024-05-15 16:06:20.048894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.743 [2024-05-15 16:06:20.049586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.049915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.049931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.743 [2024-05-15 16:06:20.049943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.743 [2024-05-15 16:06:20.050133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.743 [2024-05-15 16:06:20.050336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.743 [2024-05-15 16:06:20.050348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.743 [2024-05-15 16:06:20.050357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.743 [2024-05-15 16:06:20.053087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.743 [2024-05-15 16:06:20.061861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.743 [2024-05-15 16:06:20.062501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.062937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.062950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.743 [2024-05-15 16:06:20.062960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.743 [2024-05-15 16:06:20.063132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.743 [2024-05-15 16:06:20.063310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.743 [2024-05-15 16:06:20.063322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.743 [2024-05-15 16:06:20.063331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.743 [2024-05-15 16:06:20.066025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.743 [2024-05-15 16:06:20.074820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.743 [2024-05-15 16:06:20.075233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.075695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.075707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.743 [2024-05-15 16:06:20.075717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.743 [2024-05-15 16:06:20.075888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.743 [2024-05-15 16:06:20.076066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.743 [2024-05-15 16:06:20.076077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.743 [2024-05-15 16:06:20.076085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.743 [2024-05-15 16:06:20.078821] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.743 [2024-05-15 16:06:20.087986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.743 [2024-05-15 16:06:20.088630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.088824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.088837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.743 [2024-05-15 16:06:20.088846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.743 [2024-05-15 16:06:20.089018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.743 [2024-05-15 16:06:20.089195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.743 [2024-05-15 16:06:20.089209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.743 [2024-05-15 16:06:20.089218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.743 [2024-05-15 16:06:20.091916] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.743 [2024-05-15 16:06:20.100947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.743 [2024-05-15 16:06:20.101622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.102023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.743 [2024-05-15 16:06:20.102035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.743 [2024-05-15 16:06:20.102045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.743 [2024-05-15 16:06:20.102222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.743 [2024-05-15 16:06:20.102394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.743 [2024-05-15 16:06:20.102405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.743 [2024-05-15 16:06:20.102414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.105110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.744 [2024-05-15 16:06:20.113851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.114487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.114903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.114915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.114924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.115096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.115273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.744 [2024-05-15 16:06:20.115285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.744 [2024-05-15 16:06:20.115294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.117982] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.744 [2024-05-15 16:06:20.126790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.127420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.127894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.127906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.127915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.128086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.128262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.744 [2024-05-15 16:06:20.128273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.744 [2024-05-15 16:06:20.128285] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.131010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.744 [2024-05-15 16:06:20.139737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.140398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.140947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.140986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.141017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.141393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.141566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.744 [2024-05-15 16:06:20.141577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.744 [2024-05-15 16:06:20.141586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.144252] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.744 [2024-05-15 16:06:20.152739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.153417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.153880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.153893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.153903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.154074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.154253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.744 [2024-05-15 16:06:20.154264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.744 [2024-05-15 16:06:20.154273] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.156964] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.744 [2024-05-15 16:06:20.165717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.166384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.166882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.166922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.166954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.167528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.167701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.744 [2024-05-15 16:06:20.167712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.744 [2024-05-15 16:06:20.167721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.170417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.744 [2024-05-15 16:06:20.178679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.179335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.179742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.179782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.179815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.180420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.180759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.744 [2024-05-15 16:06:20.180770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.744 [2024-05-15 16:06:20.180779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.183485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.744 [2024-05-15 16:06:20.191708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.192396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.192841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.192880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.192912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.193518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.193765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.744 [2024-05-15 16:06:20.193776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.744 [2024-05-15 16:06:20.193784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.196463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.744 [2024-05-15 16:06:20.204717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.205393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.205929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.205969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.206001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.206435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.206603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.744 [2024-05-15 16:06:20.206614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.744 [2024-05-15 16:06:20.206623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.744 [2024-05-15 16:06:20.209232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.744 [2024-05-15 16:06:20.217617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.744 [2024-05-15 16:06:20.218300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.218815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.744 [2024-05-15 16:06:20.218855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.744 [2024-05-15 16:06:20.218888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.744 [2024-05-15 16:06:20.219494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.744 [2024-05-15 16:06:20.219855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.745 [2024-05-15 16:06:20.219866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.745 [2024-05-15 16:06:20.219875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.745 [2024-05-15 16:06:20.222556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.745 [2024-05-15 16:06:20.230560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.745 [2024-05-15 16:06:20.231232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.231732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.231772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.745 [2024-05-15 16:06:20.231804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.745 [2024-05-15 16:06:20.232407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.745 [2024-05-15 16:06:20.232858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.745 [2024-05-15 16:06:20.232868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.745 [2024-05-15 16:06:20.232877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.745 [2024-05-15 16:06:20.235556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.745 [2024-05-15 16:06:20.243553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.745 [2024-05-15 16:06:20.244211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.244665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.244678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.745 [2024-05-15 16:06:20.244687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.745 [2024-05-15 16:06:20.244858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.745 [2024-05-15 16:06:20.245030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.745 [2024-05-15 16:06:20.245041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.745 [2024-05-15 16:06:20.245050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.745 [2024-05-15 16:06:20.247716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.745 [2024-05-15 16:06:20.256477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.745 [2024-05-15 16:06:20.257062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.257523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.257536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.745 [2024-05-15 16:06:20.257545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.745 [2024-05-15 16:06:20.257716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.745 [2024-05-15 16:06:20.257887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.745 [2024-05-15 16:06:20.257898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.745 [2024-05-15 16:06:20.257907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.745 [2024-05-15 16:06:20.260585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.745 [2024-05-15 16:06:20.269327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.745 [2024-05-15 16:06:20.269935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.270376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.270418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.745 [2024-05-15 16:06:20.270450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.745 [2024-05-15 16:06:20.270806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.745 [2024-05-15 16:06:20.270974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.745 [2024-05-15 16:06:20.270984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.745 [2024-05-15 16:06:20.270993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.745 [2024-05-15 16:06:20.273653] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.745 [2024-05-15 16:06:20.282173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.745 [2024-05-15 16:06:20.282843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.283241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.283284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.745 [2024-05-15 16:06:20.283316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.745 [2024-05-15 16:06:20.283799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.745 [2024-05-15 16:06:20.283967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.745 [2024-05-15 16:06:20.283977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.745 [2024-05-15 16:06:20.283985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.745 [2024-05-15 16:06:20.286660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.745 [2024-05-15 16:06:20.295162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.745 [2024-05-15 16:06:20.295783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.296314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.745 [2024-05-15 16:06:20.296366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:21.745 [2024-05-15 16:06:20.296398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:21.745 [2024-05-15 16:06:20.296842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:21.745 [2024-05-15 16:06:20.297015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.745 [2024-05-15 16:06:20.297026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.745 [2024-05-15 16:06:20.297035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.745 [2024-05-15 16:06:20.299697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.005 [2024-05-15 16:06:20.308118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.005 [2024-05-15 16:06:20.308790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.309358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.309426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.005 [2024-05-15 16:06:20.309467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.005 [2024-05-15 16:06:20.310022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.005 [2024-05-15 16:06:20.310213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.005 [2024-05-15 16:06:20.310229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.005 [2024-05-15 16:06:20.310244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.005 [2024-05-15 16:06:20.313017] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.005 [2024-05-15 16:06:20.321160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.005 [2024-05-15 16:06:20.321838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.322342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.322385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.005 [2024-05-15 16:06:20.322418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.005 [2024-05-15 16:06:20.322646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.005 [2024-05-15 16:06:20.322833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.005 [2024-05-15 16:06:20.322844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.005 [2024-05-15 16:06:20.322853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.005 [2024-05-15 16:06:20.325518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.005 [2024-05-15 16:06:20.334104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.005 [2024-05-15 16:06:20.334778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.335210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.335252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.005 [2024-05-15 16:06:20.335293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.005 [2024-05-15 16:06:20.335789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.005 [2024-05-15 16:06:20.335962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.005 [2024-05-15 16:06:20.335973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.005 [2024-05-15 16:06:20.335992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.005 [2024-05-15 16:06:20.338681] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.005 [2024-05-15 16:06:20.347054] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.005 [2024-05-15 16:06:20.347696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.348209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.348251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.005 [2024-05-15 16:06:20.348283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.005 [2024-05-15 16:06:20.348711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.005 [2024-05-15 16:06:20.348884] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.005 [2024-05-15 16:06:20.348895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.005 [2024-05-15 16:06:20.348904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.005 [2024-05-15 16:06:20.351582] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.005 [2024-05-15 16:06:20.359955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.005 [2024-05-15 16:06:20.360614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.361127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.005 [2024-05-15 16:06:20.361168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.005 [2024-05-15 16:06:20.361211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.005 [2024-05-15 16:06:20.361808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.005 [2024-05-15 16:06:20.362184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.362199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.362208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.364869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.006 [2024-05-15 16:06:20.372936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.373593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.374050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.374063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.006 [2024-05-15 16:06:20.374072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.006 [2024-05-15 16:06:20.374252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.006 [2024-05-15 16:06:20.374424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.374435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.374444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.377137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.006 [2024-05-15 16:06:20.386031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.386702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.387205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.387219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.006 [2024-05-15 16:06:20.387229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.006 [2024-05-15 16:06:20.387401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.006 [2024-05-15 16:06:20.387573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.387584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.387593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.390262] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.006 [2024-05-15 16:06:20.398960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.399562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.400003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.400044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.006 [2024-05-15 16:06:20.400075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.006 [2024-05-15 16:06:20.400686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.006 [2024-05-15 16:06:20.401202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.401213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.401222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.403914] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.006 [2024-05-15 16:06:20.411871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.412455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.412971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.412983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.006 [2024-05-15 16:06:20.412993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.006 [2024-05-15 16:06:20.413164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.006 [2024-05-15 16:06:20.413343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.413354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.413363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.416070] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.006 [2024-05-15 16:06:20.424857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.425541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.425824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.425838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.006 [2024-05-15 16:06:20.425850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.006 [2024-05-15 16:06:20.426026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.006 [2024-05-15 16:06:20.426209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.426221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.426230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.428922] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.006 [2024-05-15 16:06:20.437822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.438448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.438932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.438972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.006 [2024-05-15 16:06:20.439009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.006 [2024-05-15 16:06:20.439180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.006 [2024-05-15 16:06:20.439364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.439376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.439385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.442074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.006 [2024-05-15 16:06:20.450825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.451503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.451954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.451994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.006 [2024-05-15 16:06:20.452026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.006 [2024-05-15 16:06:20.452285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.006 [2024-05-15 16:06:20.452468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.452478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.452490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.455122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.006 [2024-05-15 16:06:20.463698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.464123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.464594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.464639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.006 [2024-05-15 16:06:20.464671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.006 [2024-05-15 16:06:20.465104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.006 [2024-05-15 16:06:20.465294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.006 [2024-05-15 16:06:20.465305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.006 [2024-05-15 16:06:20.465314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.006 [2024-05-15 16:06:20.467954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.006 [2024-05-15 16:06:20.476534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.006 [2024-05-15 16:06:20.477217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.477740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.006 [2024-05-15 16:06:20.477781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.007 [2024-05-15 16:06:20.477813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.007 [2024-05-15 16:06:20.478361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.007 [2024-05-15 16:06:20.478603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.007 [2024-05-15 16:06:20.478617] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.007 [2024-05-15 16:06:20.478629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.007 [2024-05-15 16:06:20.482410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.007 [2024-05-15 16:06:20.489912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.007 [2024-05-15 16:06:20.490557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.491078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.491119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.007 [2024-05-15 16:06:20.491151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.007 [2024-05-15 16:06:20.491768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.007 [2024-05-15 16:06:20.492198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.007 [2024-05-15 16:06:20.492209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.007 [2024-05-15 16:06:20.492218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.007 [2024-05-15 16:06:20.494874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.007 [2024-05-15 16:06:20.502812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.007 [2024-05-15 16:06:20.503520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.504043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.504084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.007 [2024-05-15 16:06:20.504117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.007 [2024-05-15 16:06:20.504735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.007 [2024-05-15 16:06:20.505345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.007 [2024-05-15 16:06:20.505356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.007 [2024-05-15 16:06:20.505365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.007 [2024-05-15 16:06:20.508015] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.007 [2024-05-15 16:06:20.515783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.007 [2024-05-15 16:06:20.516437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.516956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.516996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.007 [2024-05-15 16:06:20.517028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.007 [2024-05-15 16:06:20.517489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.007 [2024-05-15 16:06:20.517662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.007 [2024-05-15 16:06:20.517672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.007 [2024-05-15 16:06:20.517681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.007 [2024-05-15 16:06:20.520337] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.007 [2024-05-15 16:06:20.528610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.007 [2024-05-15 16:06:20.529255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.529700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.529740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.007 [2024-05-15 16:06:20.529773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.007 [2024-05-15 16:06:20.530238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.007 [2024-05-15 16:06:20.530413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.007 [2024-05-15 16:06:20.530423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.007 [2024-05-15 16:06:20.530432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.007 [2024-05-15 16:06:20.533074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.007 [2024-05-15 16:06:20.541506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.007 [2024-05-15 16:06:20.542170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.542696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.542737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.007 [2024-05-15 16:06:20.542769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.007 [2024-05-15 16:06:20.543204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.007 [2024-05-15 16:06:20.543393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.007 [2024-05-15 16:06:20.543404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.007 [2024-05-15 16:06:20.543413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.007 [2024-05-15 16:06:20.546070] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.007 [2024-05-15 16:06:20.554412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.007 [2024-05-15 16:06:20.555084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.555584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.007 [2024-05-15 16:06:20.555627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.007 [2024-05-15 16:06:20.555667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.007 [2024-05-15 16:06:20.555833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.007 [2024-05-15 16:06:20.556000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.007 [2024-05-15 16:06:20.556010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.007 [2024-05-15 16:06:20.556019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.007 [2024-05-15 16:06:20.558684] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.268 [2024-05-15 16:06:20.567517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.268 [2024-05-15 16:06:20.568117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.268 [2024-05-15 16:06:20.568589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.268 [2024-05-15 16:06:20.568603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.268 [2024-05-15 16:06:20.568614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.268 [2024-05-15 16:06:20.568789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.268 [2024-05-15 16:06:20.568961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.268 [2024-05-15 16:06:20.568971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.268 [2024-05-15 16:06:20.568980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.268 [2024-05-15 16:06:20.571640] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.268 [2024-05-15 16:06:20.580389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.268 [2024-05-15 16:06:20.581075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.268 [2024-05-15 16:06:20.581602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.268 [2024-05-15 16:06:20.581646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.268 [2024-05-15 16:06:20.581680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.268 [2024-05-15 16:06:20.582298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.268 [2024-05-15 16:06:20.582760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.268 [2024-05-15 16:06:20.582771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.268 [2024-05-15 16:06:20.582780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.268 [2024-05-15 16:06:20.585511] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3916582 Killed "${NVMF_APP[@]}" "$@"
00:28:22.268 16:06:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:22.268 16:06:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:22.269 [2024-05-15 16:06:20.593299 .. 16:06:20.597384] one more identical reconnect cycle fails against tqpair=0x65e9f0 (errno = 111)
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3917961
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3917961
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3917961 ']'
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:22.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
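This is the failover step of the bdevperf test: the harness SIGKILLs the running nvmf target (pid 3916582) mid-I/O, then tgt_init/nvmfappstart relaunch it inside the cvl_0_0_ns_spdk network namespace with the same arguments. A minimal sketch of what the trace above performs (paths and flags copied from the trace; error handling and the harness wrappers omitted):

  kill -9 3916582                                   # kill the old target (done by the harness via "${NVMF_APP[@]}")
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xE &                       # shm id 0, all tracepoint groups, cores 1-3
  # waitforlisten then polls (max_retries=100) until the app answers on /var/tmp/spdk.sock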
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:22.269 16:06:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:22.269 [2024-05-15 16:06:20.606317 .. 16:06:20.623614] two more identical reconnect cycles fail (errno = 111)
00:28:22.269 [2024-05-15 16:06:20.632361 .. 16:06:20.649508] two more identical reconnect cycles fail (errno = 111); interleaved with them, the relaunched target begins to initialize:
00:28:22.269 [2024-05-15 16:06:20.649037] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization...
00:28:22.269 [2024-05-15 16:06:20.649086] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:22.269 [2024-05-15 16:06:20.658266 .. 16:06:20.675653] two more identical reconnect cycles fail (errno = 111)
00:28:22.270 [2024-05-15 16:06:20.684299 .. 16:06:20.701238] two more identical reconnect cycles fail (errno = 111); during them the target's EAL reports:
00:28:22.270 EAL: No free 2048 kB hugepages reported on node 1
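The EAL notice about node 1 comes from the restarting target; it is usually benign when 2048 kB hugepages were only reserved on NUMA node 0. The per-node pools can be checked directly through sysfs (standard Linux paths):

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages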
00:28:22.270 [2024-05-15 16:06:20.710262 .. 16:06:20.727431] two more identical reconnect cycles fail (errno = 111); the target continues starting up:
00:28:22.270 [2024-05-15 16:06:20.724973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
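"Total cores available: 3" matches the -m 0xE mask passed to nvmf_tgt: 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left to the OS, which is also why three reactors come up on cores 1-3 below. The mask arithmetic, for reference (a sketch, assuming python3):

  python3 -c 'm = 0xE; print([i for i in range(8) if m >> i & 1])'
  # prints: [1, 2, 3]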
00:28:22.270 [2024-05-15 16:06:20.736198 .. 16:06:20.779215] four more identical reconnect cycles fail (errno = 111)
00:28:22.271 [2024-05-15 16:06:20.788010 .. 16:06:20.792152] one more identical reconnect cycle fails (errno = 111)
00:28:22.271 [2024-05-15 16:06:20.796450] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:22.271 [2024-05-15 16:06:20.796480] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:22.271 [2024-05-15 16:06:20.796493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:22.271 [2024-05-15 16:06:20.796502] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:22.271 [2024-05-15 16:06:20.796509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
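The notices above spell out the capture recipe for the enabled 0xFFFF tracepoint mask; while the app runs, a snapshot can be taken live or the shm ring copied for offline decoding (commands as printed by the app; the copy destination below is illustrative):

  spdk_trace -s nvmf -i 0               # live snapshot of the nvmf app started with -i 0
  cp /dev/shm/nvmf_trace.0 /tmp/        # keep the trace buffer for offline analysis/debug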
00:28:22.271 [2024-05-15 16:06:20.796557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:22.271 [2024-05-15 16:06:20.796660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:28:22.271 [2024-05-15 16:06:20.796662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:22.271 [2024-05-15 16:06:20.801049 .. 16:06:20.818334] two more identical reconnect cycles fail (errno = 111)
00:28:22.271 [2024-05-15 16:06:20.827003 .. 16:06:21.129550] the host keeps retrying: twenty-four more identical reconnect cycles against tqpair=0x65e9f0 fail the same way (connect() to 10.0.0.2:4420 refused with errno = 111, controller reinitialization failed, reset failed)
00:28:22.794 [2024-05-15 16:06:21.138310] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.794 [2024-05-15 16:06:21.138897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.139237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.139250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.794 [2024-05-15 16:06:21.139259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.794 [2024-05-15 16:06:21.139431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.794 [2024-05-15 16:06:21.139603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.794 [2024-05-15 16:06:21.139613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.794 [2024-05-15 16:06:21.139622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.794 [2024-05-15 16:06:21.142338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.794 [2024-05-15 16:06:21.151243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.794 [2024-05-15 16:06:21.151665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.152004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.152016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.794 [2024-05-15 16:06:21.152026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.794 [2024-05-15 16:06:21.152205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.794 [2024-05-15 16:06:21.152384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.794 [2024-05-15 16:06:21.152395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.794 [2024-05-15 16:06:21.152404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.794 [2024-05-15 16:06:21.155095] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.794 [2024-05-15 16:06:21.164234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.794 [2024-05-15 16:06:21.164591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.164981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.164993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.794 [2024-05-15 16:06:21.165003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.794 [2024-05-15 16:06:21.165175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.794 [2024-05-15 16:06:21.165358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.794 [2024-05-15 16:06:21.165370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.794 [2024-05-15 16:06:21.165379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.794 [2024-05-15 16:06:21.168087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.794 [2024-05-15 16:06:21.177196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.794 [2024-05-15 16:06:21.177715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.178049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.178061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.794 [2024-05-15 16:06:21.178071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.794 [2024-05-15 16:06:21.178247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.794 [2024-05-15 16:06:21.178419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.794 [2024-05-15 16:06:21.178430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.794 [2024-05-15 16:06:21.178439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.794 [2024-05-15 16:06:21.181130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.794 [2024-05-15 16:06:21.190211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.794 [2024-05-15 16:06:21.190612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.190944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.190956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.794 [2024-05-15 16:06:21.190966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.794 [2024-05-15 16:06:21.191137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.794 [2024-05-15 16:06:21.191314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.794 [2024-05-15 16:06:21.191328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.794 [2024-05-15 16:06:21.191337] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.794 [2024-05-15 16:06:21.194032] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.794 [2024-05-15 16:06:21.203254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.794 [2024-05-15 16:06:21.203859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.204080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.204092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.794 [2024-05-15 16:06:21.204102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.794 [2024-05-15 16:06:21.204276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.794 [2024-05-15 16:06:21.204448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.794 [2024-05-15 16:06:21.204459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.794 [2024-05-15 16:06:21.204468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.794 [2024-05-15 16:06:21.207159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.794 [2024-05-15 16:06:21.216221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.794 [2024-05-15 16:06:21.216724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.217177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.794 [2024-05-15 16:06:21.217194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.794 [2024-05-15 16:06:21.217204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.794 [2024-05-15 16:06:21.217374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.794 [2024-05-15 16:06:21.217547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.794 [2024-05-15 16:06:21.217557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.794 [2024-05-15 16:06:21.217566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.794 [2024-05-15 16:06:21.220260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.795 [2024-05-15 16:06:21.229157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.229679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.230115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.230128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.230137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.230313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.230485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.230495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.230507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.233201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.795 [2024-05-15 16:06:21.242104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.242763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.243219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.243232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.243242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.243413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.243586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.243597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.243606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.246301] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.795 [2024-05-15 16:06:21.255040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.255600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.255990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.256002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.256012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.256184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.256360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.256371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.256380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.259069] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.795 [2024-05-15 16:06:21.267960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.268569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.269012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.269025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.269034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.269209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.269380] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.269391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.269400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.272087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.795 [2024-05-15 16:06:21.280987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.281600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.281924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.281937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.281948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.282119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.282297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.282308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.282317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.285004] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.795 [2024-05-15 16:06:21.293898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.294473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.294834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.294847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.294856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.295027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.295203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.295214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.295223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.297915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.795 [2024-05-15 16:06:21.306822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.307382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.307822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.307835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.307844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.308016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.308188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.308204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.308214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.310904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.795 [2024-05-15 16:06:21.319800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.320458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.320835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.320848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.320858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.321030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.321208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.321219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.321228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.323919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.795 [2024-05-15 16:06:21.332813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.333375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.333709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.333722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.333732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.333905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.334077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.334088] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.334097] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.336788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.795 [2024-05-15 16:06:21.345856] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.795 [2024-05-15 16:06:21.346451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.346769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.795 [2024-05-15 16:06:21.346782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:22.795 [2024-05-15 16:06:21.346792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:22.795 [2024-05-15 16:06:21.346963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:22.795 [2024-05-15 16:06:21.347135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.795 [2024-05-15 16:06:21.347146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.795 [2024-05-15 16:06:21.347155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.795 [2024-05-15 16:06:21.349847] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:23.056 [2024-05-15 16:06:21.358916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.056 [2024-05-15 16:06:21.359442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.359798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.359817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:23.056 [2024-05-15 16:06:21.359828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:23.056 [2024-05-15 16:06:21.360005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:23.056 [2024-05-15 16:06:21.360198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.056 [2024-05-15 16:06:21.360221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.056 [2024-05-15 16:06:21.360233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.056 [2024-05-15 16:06:21.362959] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:23.056 [2024-05-15 16:06:21.371864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.056 [2024-05-15 16:06:21.372434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.372822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.372835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:23.056 [2024-05-15 16:06:21.372846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:23.056 [2024-05-15 16:06:21.373019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:23.056 [2024-05-15 16:06:21.373195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.056 [2024-05-15 16:06:21.373206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.056 [2024-05-15 16:06:21.373216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.056 [2024-05-15 16:06:21.375907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:23.056 [2024-05-15 16:06:21.384810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.056 [2024-05-15 16:06:21.385377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.385712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.385725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:23.056 [2024-05-15 16:06:21.385734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:23.056 [2024-05-15 16:06:21.385907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:23.056 [2024-05-15 16:06:21.386078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.056 [2024-05-15 16:06:21.386089] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.056 [2024-05-15 16:06:21.386098] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.056 [2024-05-15 16:06:21.388792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:23.056 [2024-05-15 16:06:21.397852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.056 [2024-05-15 16:06:21.398365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.398708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.398720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:23.056 [2024-05-15 16:06:21.398733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:23.056 [2024-05-15 16:06:21.398905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:23.056 [2024-05-15 16:06:21.399077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.056 [2024-05-15 16:06:21.399087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.056 [2024-05-15 16:06:21.399096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.056 [2024-05-15 16:06:21.401795] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:23.056 [2024-05-15 16:06:21.410756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.056 [2024-05-15 16:06:21.411353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.056 [2024-05-15 16:06:21.411793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.057 [2024-05-15 16:06:21.411806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:23.057 [2024-05-15 16:06:21.411816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:23.057 [2024-05-15 16:06:21.411988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:23.057 [2024-05-15 16:06:21.412161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.057 [2024-05-15 16:06:21.412171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.057 [2024-05-15 16:06:21.412181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.057 [2024-05-15 16:06:21.414877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:23.057 [2024-05-15 16:06:21.423793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.057 [2024-05-15 16:06:21.424383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.057 [2024-05-15 16:06:21.424776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.057 [2024-05-15 16:06:21.424789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:23.057 [2024-05-15 16:06:21.424798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:23.057 [2024-05-15 16:06:21.424970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:23.057 [2024-05-15 16:06:21.425141] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.057 [2024-05-15 16:06:21.425152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.057 [2024-05-15 16:06:21.425162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.057 [2024-05-15 16:06:21.427874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:23.057 [2024-05-15 16:06:21.436796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.057 [2024-05-15 16:06:21.437310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.057 [2024-05-15 16:06:21.437706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.057 [2024-05-15 16:06:21.437719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e9f0 with addr=10.0.0.2, port=4420 00:28:23.057 [2024-05-15 16:06:21.437728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65e9f0 is same with the state(5) to be set 00:28:23.057 [2024-05-15 16:06:21.437903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e9f0 (9): Bad file descriptor 00:28:23.057 [2024-05-15 16:06:21.438075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.057 [2024-05-15 16:06:21.438086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.057 [2024-05-15 16:06:21.438095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.057 [2024-05-15 16:06:21.440801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
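Note on the loop above: errno = 111 on Linux is ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 yet. bdevperf started before the target's listener was created, so every controller reset redials the socket, is refused, and the reset attempt fails, until the listener appears later in the log. A minimal, hypothetical way to watch for the listener by hand (the address and port are taken from the log above; the polling loop itself is an assumption, not part of the test scripts):

  # Poll until the NVMe/TCP listener accepts connections.
  # nc -z does a zero-I/O connect probe; -w 1 caps each attempt at one second.
  while ! nc -z -w 1 10.0.0.2 4420; do
    sleep 0.1
  done
  echo 'listener on 10.0.0.2:4420 is up'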
00:28:23.057 [2024-05-15 16:06:21.449715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.450272-21.451113, identical in form to the sequence shown above ...]
00:28:23.057 [2024-05-15 16:06:21.453812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:23.057 16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:28:23.057 16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:28:23.057 16:06:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:23.057 [2024-05-15 16:06:21.462724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:23.057 16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:23.057 16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... reconnect-failure records 16:06:21.463294-21.464119 ...]
00:28:23.057 [2024-05-15 16:06:21.466816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:23.057 [2024-05-15 16:06:21.475728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.476341-21.477076 ...]
00:28:23.057 [2024-05-15 16:06:21.479777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:23.057 [2024-05-15 16:06:21.488679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.489077-21.489854 ...]
00:28:23.057 [2024-05-15 16:06:21.492555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:23.057 [2024-05-15 16:06:21.501618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.502136-21.502906 ...]
[2024-05-15 16:06:21.505603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:23.057 16:06:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
16:06:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 16:06:21.512171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-05-15 16:06:21.514515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.515024-21.515800 ...]
00:28:23.058 16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:06:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 16:06:21.518494] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-05-15 16:06:21.527564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.528077-21.528801 ...]
[2024-05-15 16:06:21.531499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-05-15 16:06:21.540568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.541156-21.541983 ...]
[2024-05-15 16:06:21.544679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:23.058 [2024-05-15 16:06:21.553604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.554180-21.555008 ...]
[2024-05-15 16:06:21.557709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:23.058 Malloc0
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:06:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 16:06:21.566613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.567249-21.568018 ...]
[2024-05-15 16:06:21.570713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:23.058 16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:06:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 16:06:21.579622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... reconnect-failure records 16:06:21.580256-21.581082, interleaved with another [[ 0 == 0 ]] trace line ...]
16:06:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 16:06:21.583778] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-05-15 16:06:21.584112] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
[2024-05-15 16:06:21.584352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
16:06:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:06:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3916923
[2024-05-15 16:06:21.592520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:23.316 [2024-05-15 16:06:21.665279] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
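For orientation: the rpc_cmd lines traced above assemble the NVMe-oF target that the initiator had been failing to reach. Assuming rpc_cmd forwards to scripts/rpc.py, as it does in SPDK's test common library, a hypothetical manual replay of the same bring-up against an already-started target app would look like this (every argument is copied from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                      # create the TCP transport, options as traced
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                         # 64 MB RAM-backed bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host to connect
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The ordering explains the errors above: only after the final nvmf_subsystem_add_listener does the target log "NVMe/TCP Target Listening", and the very next reset attempt (16:06:21.592520) completes with "Resetting controller successful".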
00:28:33.279 00:28:33.279 Latency(us) 00:28:33.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.279 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:33.279 Verification LBA range: start 0x0 length 0x4000 00:28:33.279 Nvme1n1 : 15.01 8563.75 33.45 12423.06 0.00 6079.67 1461.45 23907.53 00:28:33.279 =================================================================================================================== 00:28:33.279 Total : 8563.75 33.45 12423.06 0.00 6079.67 1461.45 23907.53 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:33.279 rmmod nvme_tcp 00:28:33.279 rmmod nvme_fabrics 00:28:33.279 rmmod nvme_keyring 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3917961 ']' 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3917961 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3917961 ']' 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3917961 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3917961 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3917961' 00:28:33.279 killing process with pid 3917961 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3917961 00:28:33.279 [2024-05-15 16:06:30.509304] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 3917961 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.279 16:06:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.655 16:06:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:34.655 00:28:34.655 real 0m27.811s 00:28:34.655 user 1m3.150s 00:28:34.655 sys 0m8.254s 00:28:34.655 16:06:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:34.655 16:06:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.655 ************************************ 00:28:34.655 END TEST nvmf_bdevperf 00:28:34.655 ************************************ 00:28:34.655 16:06:32 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:34.655 16:06:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:34.655 16:06:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:34.655 16:06:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:34.655 ************************************ 00:28:34.655 START TEST nvmf_target_disconnect 00:28:34.655 ************************************ 00:28:34.655 16:06:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:34.655 * Looking for test storage... 
00:28:34.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.655 16:06:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.238 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:41.239 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:41.239 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.239 16:06:39 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:41.239 Found net devices under 0000:af:00.0: cvl_0_0 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:41.239 Found net devices under 0000:af:00.1: cvl_0_1 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.239 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:41.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:28:41.240 00:28:41.240 --- 10.0.0.2 ping statistics --- 00:28:41.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.240 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:28:41.240 00:28:41.240 --- 10.0.0.1 ping statistics --- 00:28:41.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.240 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:41.240 ************************************ 00:28:41.240 START TEST nvmf_target_disconnect_tc1 00:28:41.240 ************************************ 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:28:41.240 
16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.240 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.240 [2024-05-15 16:06:39.668089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.240 [2024-05-15 16:06:39.668684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.240 [2024-05-15 16:06:39.668699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af4b0 with addr=10.0.0.2, port=4420 00:28:41.240 [2024-05-15 16:06:39.668720] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:41.240 [2024-05-15 16:06:39.668731] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:41.240 [2024-05-15 16:06:39.668742] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:41.240 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:41.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:41.240 Initializing NVMe Controllers 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:41.240 00:28:41.240 real 0m0.110s 00:28:41.240 user 0m0.043s 00:28:41.240 sys 0m0.066s 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:41.240 ************************************ 00:28:41.240 END TEST nvmf_target_disconnect_tc1 00:28:41.240 ************************************ 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:41.240 ************************************ 00:28:41.240 START TEST nvmf_target_disconnect_tc2 00:28:41.240 ************************************ 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3923286 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3923286 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3923286 ']' 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
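The nvmf_tgt instance being started here runs inside the cvl_0_0_ns_spdk network namespace that was set up earlier in the run. A stripped-down sketch of the same launch-and-wait step, using the command line shown in the trace and a simplified socket poll in place of the harness's waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # crude stand-in for waitforlisten: block until the RPC unix socket exists
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done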
00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:41.240 16:06:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.498 [2024-05-15 16:06:39.827454] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:28:41.498 [2024-05-15 16:06:39.827499] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.499 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.499 [2024-05-15 16:06:39.915599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.499 [2024-05-15 16:06:39.986372] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.499 [2024-05-15 16:06:39.986412] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.499 [2024-05-15 16:06:39.986422] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.499 [2024-05-15 16:06:39.986430] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.499 [2024-05-15 16:06:39.986437] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.499 [2024-05-15 16:06:39.986560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:41.499 [2024-05-15 16:06:39.986764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:41.499 [2024-05-15 16:06:39.986852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:41.499 [2024-05-15 16:06:39.986854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 Malloc0 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 [2024-05-15 16:06:40.699951] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 [2024-05-15 16:06:40.727964] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:42.432 [2024-05-15 16:06:40.728223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3923521 00:28:42.432 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:42.433 16:06:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.433 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.338 16:06:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3923286 00:28:44.338 16:06:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with 
error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 [2024-05-15 16:06:42.755903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Read completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.338 Write completed with error (sct=0, sc=8) 00:28:44.338 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 [2024-05-15 16:06:42.756132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 
00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 [2024-05-15 16:06:42.756366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 
starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Read completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 Write completed with error (sct=0, sc=8) 00:28:44.339 starting I/O failed 00:28:44.339 [2024-05-15 16:06:42.756586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.339 [2024-05-15 16:06:42.757016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.757589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.757633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.339 qpair failed and we were unable to recover it. 00:28:44.339 [2024-05-15 16:06:42.758228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.758662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.758701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.339 qpair failed and we were unable to recover it. 00:28:44.339 [2024-05-15 16:06:42.759257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.759745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.759784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.339 qpair failed and we were unable to recover it. 00:28:44.339 [2024-05-15 16:06:42.760174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.760632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.760672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.339 qpair failed and we were unable to recover it. 
00:28:44.339 [2024-05-15 16:06:42.761177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.761710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.761751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.339 qpair failed and we were unable to recover it. 00:28:44.339 [2024-05-15 16:06:42.762277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.763064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.763106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.339 qpair failed and we were unable to recover it. 00:28:44.339 [2024-05-15 16:06:42.763600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.764065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.339 [2024-05-15 16:06:42.764104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.339 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.764503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.764899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.764916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.765387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.765847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.765864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.766385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.766814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.766830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.767225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.767645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.767661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 
00:28:44.340 [2024-05-15 16:06:42.768158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.768604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.768644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.769202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.769587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.769627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.770128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.770661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.770701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.771109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.771642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.771659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.772060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.772519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.772535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.772924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.773331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.773347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.773715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.774184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.774205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 
00:28:44.340 [2024-05-15 16:06:42.774616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.775007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.775047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.775523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.775927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.775966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.776442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.776813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.776853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.777367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.777852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.777891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.778437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.778899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.778938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.779455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.779962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.780001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.780547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.780985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.781024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 
00:28:44.340 [2024-05-15 16:06:42.781571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.782006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.782045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.782590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.782973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.783011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.783502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.784032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.784070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.784582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.785070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.785108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.785606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.786135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.786174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.786718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.787256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.787296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.340 [2024-05-15 16:06:42.787835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.788268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.788308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 
00:28:44.340 [2024-05-15 16:06:42.788801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.789234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.340 [2024-05-15 16:06:42.789273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.340 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.789749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.790233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.790284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.790702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.791099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.791143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.791686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.792240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.792279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.792797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.793232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.793272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.793714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.794219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.794260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.794790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.795305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.795345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 
00:28:44.341 [2024-05-15 16:06:42.795791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.796312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.796329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.796798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.797317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.797356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.797912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.798409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.798449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.798972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.799380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.799420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.799935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.800420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.800460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.801006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.801513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.801566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.802133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.802708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.802749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 
00:28:44.341 [2024-05-15 16:06:42.803306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.803818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.803857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.804350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.804853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.804892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.805354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.805859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.805898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.806435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.806872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.806914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.807450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.807990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.808029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.808521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.809028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.809068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.809572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.810007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.810046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 
00:28:44.341 [2024-05-15 16:06:42.810588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.811123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.811162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.811688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.812173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.812229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.812774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.813260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.813300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.813816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.814279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.814320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.814787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.815239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.815279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.341 qpair failed and we were unable to recover it. 00:28:44.341 [2024-05-15 16:06:42.815797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.341 [2024-05-15 16:06:42.816305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.816345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.816897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.817398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.817439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 
00:28:44.342 [2024-05-15 16:06:42.817817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.818156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.818203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.818744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.819186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.819234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.819677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.820114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.820153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.820646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.821133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.821171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.821723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.822174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.822240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.822752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.823264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.823304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.823825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.824283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.824322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 
00:28:44.342 [2024-05-15 16:06:42.824795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.825301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.825341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.825878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.826363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.826403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.826960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.827462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.827503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.827923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.828435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.828483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.828915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.829372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.829398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.829872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.830377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.830394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.830782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.831250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.831290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 
00:28:44.342 [2024-05-15 16:06:42.831830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.832342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.832382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.832947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.833463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.833503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.833949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.834461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.834501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.835053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.835559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.835600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.836068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.836590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.836630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.837199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.837712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.837751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.838255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.838748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.838787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 
00:28:44.342 [2024-05-15 16:06:42.839285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.839775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.839814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.840342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.840878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.840917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.841466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.841904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.841943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.342 [2024-05-15 16:06:42.842450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.842910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.342 [2024-05-15 16:06:42.842949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.342 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.843481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.843987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.844004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.844407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.844917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.844956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.845509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.845963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.846002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 
00:28:44.343 [2024-05-15 16:06:42.846495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.846958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.846997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.847556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.848023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.848062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.848534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.849046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.849086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.849542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.849960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.849977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.850453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.850891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.850931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.851369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.851879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.851919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.852457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.852852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.852892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 
00:28:44.343 [2024-05-15 16:06:42.853437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.853908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.853948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.854492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.855008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.855026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.855478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.856013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.856052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.856619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.857072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.857111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.857529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.858051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.858091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.858633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.859178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.859230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.859758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.860258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.860300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 
00:28:44.343 [2024-05-15 16:06:42.860822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.861328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.861368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.861912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.862354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.862397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.862846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.863356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.343 [2024-05-15 16:06:42.863396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.343 qpair failed and we were unable to recover it. 00:28:44.343 [2024-05-15 16:06:42.863883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.864373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.864414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.864983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.865478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.865495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.865965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.866447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.866487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.866983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.867425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.867464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 
00:28:44.344 [2024-05-15 16:06:42.868012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.868509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.868550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.868998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.869481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.869521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.870068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.870647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.870687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.871154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.871674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.871715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.872177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.872637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.872677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.873202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.873645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.873685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.874241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.874704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.874744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 
00:28:44.344 [2024-05-15 16:06:42.875291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.875834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.875873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.876439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.876880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.876919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.877408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.877847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.877886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.878335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.878853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.878893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.879443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.879896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.879935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.880444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.880839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.880878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.881400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.881847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.881886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 
00:28:44.344 [2024-05-15 16:06:42.882432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.882980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.883019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.883550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.884092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.884131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.884649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.885233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.885274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.885752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.886262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.886303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.886847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.887327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.887368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.887872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.888411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.888451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.888918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.889373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.889414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 
00:28:44.344 [2024-05-15 16:06:42.889960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.890408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.890448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.344 qpair failed and we were unable to recover it. 00:28:44.344 [2024-05-15 16:06:42.890905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.344 [2024-05-15 16:06:42.891422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.891462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.345 qpair failed and we were unable to recover it. 00:28:44.345 [2024-05-15 16:06:42.891923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.892425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.892448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.345 qpair failed and we were unable to recover it. 00:28:44.345 [2024-05-15 16:06:42.892946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.893461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.893502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.345 qpair failed and we were unable to recover it. 00:28:44.345 [2024-05-15 16:06:42.894058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.894561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.894589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.345 qpair failed and we were unable to recover it. 00:28:44.345 [2024-05-15 16:06:42.895105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.895627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.345 [2024-05-15 16:06:42.895668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.345 qpair failed and we were unable to recover it. 00:28:44.345 [2024-05-15 16:06:42.896238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.896818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.896840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 
00:28:44.608 [2024-05-15 16:06:42.897247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.897726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.897743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.898213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.898710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.898750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.899329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.899756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.899796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.900328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.900797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.900815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.901289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.901751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.901768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.902227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.902754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.902793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.903368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.903871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.903910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 
00:28:44.608 [2024-05-15 16:06:42.904456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.904961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.905001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.905566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.906082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.906122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.906704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.907244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.907285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.907802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.908326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.908367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.908895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.909341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.909386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.909925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.910411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.910462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.910930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.911391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.911432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 
00:28:44.608 [2024-05-15 16:06:42.911963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.912482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.912522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.913017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.913535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.913576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.914059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.914604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.914644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.915175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.915645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.915684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.916220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.916783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.916823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.917358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.917923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.917963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.918471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.918971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.919010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 
00:28:44.608 [2024-05-15 16:06:42.919510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.920012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.920051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.920494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.920970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.921009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.921552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.922046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.922085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.922618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.923115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.923154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.923693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.924258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.924299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.924753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.925225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.925266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.608 qpair failed and we were unable to recover it. 00:28:44.608 [2024-05-15 16:06:42.925836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.926357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.608 [2024-05-15 16:06:42.926398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 
00:28:44.609 [2024-05-15 16:06:42.926972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.927493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.927535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.928088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.928504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.928545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.928939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.929436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.929477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.929985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.930526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.930566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.931118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.931659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.931700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.932267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.932789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.932828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.933400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.933960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.933977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 
00:28:44.609 [2024-05-15 16:06:42.934382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.934857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.934896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.935451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.935986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.936025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.936570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.937095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.937134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.937720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.938243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.938283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.938807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.939328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.939368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.939927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.940387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.940427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.940963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.941469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.941511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 
00:28:44.609 [2024-05-15 16:06:42.942009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.942469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.942510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.943040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.943485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.943526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.943976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.944458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.944499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.945079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.945598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.945638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.946120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.946587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.946627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.947177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.947646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.947687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.948147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.948684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.948725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 
00:28:44.609 [2024-05-15 16:06:42.949246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.949774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.949813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.950365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.950908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.950948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.951505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.951953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.951992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.952543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.953097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.953136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.953709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.954230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.954270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.954837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.955253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.955294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.955747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.956260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.956301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 
00:28:44.609 [2024-05-15 16:06:42.956840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.957346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.957386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.957935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.958457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.958498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.958976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.959419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.959471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.959991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.960436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.960476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.960922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.961388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.961429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.961962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.962481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.962522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.963073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.963592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.963632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 
00:28:44.609 [2024-05-15 16:06:42.964205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.964643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.964681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.965233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.965680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.965719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.966236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.966767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.966806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.967388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.967908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.967948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.968460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.969005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.969044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.969559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.970078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.970123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.970699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.971223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.971264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 
00:28:44.609 [2024-05-15 16:06:42.971796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.972318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.972359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.972907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.973368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.973385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.973886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.974336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.974376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.974836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.975355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.975397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.975971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.976425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.976465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.977016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.977519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.977559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.978080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.978502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.978544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 
00:28:44.609 [2024-05-15 16:06:42.979003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.979498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.609 [2024-05-15 16:06:42.979538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.609 qpair failed and we were unable to recover it. 00:28:44.609 [2024-05-15 16:06:42.979984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.980502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.980522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.981014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.981569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.981609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.982141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.982664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.982705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.983261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.983775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.983814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.984366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.984882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.984921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.985396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.985774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.985813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 
00:28:44.610 [2024-05-15 16:06:42.986340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.986862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.986901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.987444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.987939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.987978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.988511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.989020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.989059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.989612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.990121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.990160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.990648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.991151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.991203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.991767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.992219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.992260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.992805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.993377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.993418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 
00:28:44.610 [2024-05-15 16:06:42.993986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.994506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.994550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.995068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.995587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.995627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.996202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.996725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.996765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.997294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.997808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.997847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.998380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.998899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:42.998938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:42.999480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.000039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.000078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.000594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.001096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.001136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 
00:28:44.610 [2024-05-15 16:06:43.001732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.002212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.002254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.002783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.003281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.003323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.003787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.004303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.004343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.004896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.005401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.005442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.005997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.006503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.006521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.007019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.007479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.007520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.008007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.008551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.008602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 
00:28:44.610 [2024-05-15 16:06:43.009086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.009551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.009592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.010090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.010520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.010537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.010920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.011395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.011435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.011904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.012308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.012348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.012896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.013397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.013437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.013923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.014441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.014481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.015060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.015571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.015611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 
00:28:44.610 [2024-05-15 16:06:43.016098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.016576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.016616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.017176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.017650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.017668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.018183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.018732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.018772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.019324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.019825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.019864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.020398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.020794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.020811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.021294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.021843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.021882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.022430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.022870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.022910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 
00:28:44.610 [2024-05-15 16:06:43.023468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.023894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.023934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.024435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.024918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.024967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.025514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.026017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.026057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.026610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.027130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.027178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.027661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.028134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.028174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.028757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.029279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.029319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 00:28:44.610 [2024-05-15 16:06:43.029845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.030342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.030359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.610 qpair failed and we were unable to recover it. 
00:28:44.610 [2024-05-15 16:06:43.030857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.610 [2024-05-15 16:06:43.031311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.031351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.031847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.032338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.032378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.032845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.033348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.033389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.033869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.034308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.034326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.034728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.035132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.035171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.035632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.036074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.036113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.036586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.037109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.037148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 
00:28:44.611 [2024-05-15 16:06:43.037555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.038074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.038113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.038609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.039079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.039118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.039622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.040068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.040108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.040598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.041145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.041187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.041691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.042145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.042185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.042750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.043216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.043257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.043837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.044325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.044366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 
00:28:44.611 [2024-05-15 16:06:43.044889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.045383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.045424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.045976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.046498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.046538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.047112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.047541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.047581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.048101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.048598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.048615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.049077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.049526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.049567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.050021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.050465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.050506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.051028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.051536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.051577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 
00:28:44.611 [2024-05-15 16:06:43.052111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.052647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.052688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.053250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.053772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.053811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.054377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.054845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.054884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.055415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.055820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.055860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.056395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.056935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.056974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.057389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.057732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.057771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.058298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.058791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.058830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 
00:28:44.611 [2024-05-15 16:06:43.059218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.059698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.059737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.060282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.060731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.060748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.061176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.061701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.061740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.062292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.062763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.062803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.063326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.063834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.063873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.064426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.064831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.064869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.065411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.065934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.065982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 
00:28:44.611 [2024-05-15 16:06:43.066419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.066875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.066915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.067446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.067962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.068002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.068551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.069072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.069111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.069656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.070214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.070255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.611 [2024-05-15 16:06:43.070687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.071137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.611 [2024-05-15 16:06:43.071176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.611 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.071660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.072177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.072230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.072731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.073233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.073275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-05-15 16:06:43.073823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.074266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.074307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.074832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.075425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.075466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.075950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.076464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.076504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.077057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.077572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.077613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.078011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.078431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.078471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.078997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.079517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.079558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.080053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.080496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.080536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-05-15 16:06:43.081083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.081608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.081649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.082209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.082756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.082795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.083334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.083787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.083826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.084356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.084822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.084861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.085410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.085969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.086009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.086554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.087072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.087111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.087586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.087990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.088029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-05-15 16:06:43.088481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.088973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.089013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.089567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.090081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.090121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.090531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.091000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.091039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.091568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.092088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.092128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.092686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.093202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.093243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.093797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.094315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.094355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.094917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.095455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.095495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-05-15 16:06:43.096024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.096540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.096558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.096986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.097452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.097493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.098041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.098556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.098596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.099131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.099690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.099732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.100243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.100678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.100695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.101099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.101587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.101627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.102205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.102716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.102756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-05-15 16:06:43.103213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.103739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.103778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.104251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.104761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.104778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.105227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.105681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.105720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.106249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.106792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.106832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.107357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.107886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.107925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.108499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.108945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.108984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.109508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.110035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.110074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-05-15 16:06:43.110620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.111115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.111154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.111691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.112166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.112216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.112711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.113229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.113270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.113841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.114366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.114407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.114986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.115500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.115540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.116011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.116555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.116595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.117140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.117666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.117718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 
00:28:44.612 [2024-05-15 16:06:43.118274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.118790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.118829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.119376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.119819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.612 [2024-05-15 16:06:43.119857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.612 qpair failed and we were unable to recover it. 00:28:44.612 [2024-05-15 16:06:43.120407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.120954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.120993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.121545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.121995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.122034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.122478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.122974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.123020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.123466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.123935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.123976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.124411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.124845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.124884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-05-15 16:06:43.125392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.125774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.125813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.126372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.126932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.126972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.127462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.127959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.128004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.128500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.129025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.129065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.129633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.130024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.130063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.130615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.131152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.131201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.131756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.132265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.132305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-05-15 16:06:43.132832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.133334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.133375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.133844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.134360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.134401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.134883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.135333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.135373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.135892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.136411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.136451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.136842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.137338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.137378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.137885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.138429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.138476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.138956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.139440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.139481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-05-15 16:06:43.139935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.140443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.140461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.140924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.141362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.141422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.141879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.142358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.142398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.142951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.143478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.143519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.144053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.144514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.144554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.145013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.145550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.145591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.146123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.146658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.146698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-05-15 16:06:43.147216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.147739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.147779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.148350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.148830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.148875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.149378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.149827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.149845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.150247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.150722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.150761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.151341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.151785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.151825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.152276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.152793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.152833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.153386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.153828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.153867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-05-15 16:06:43.154355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.154896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.154936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.155416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.155934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.155973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.156548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.157070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.157110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.157615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.158119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.158167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.158645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.159119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.159158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.159581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.160104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.160143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.160728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.161303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.161344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 
00:28:44.613 [2024-05-15 16:06:43.161809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.162179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.162231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.162661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.163109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.163152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.163695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.164135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.613 [2024-05-15 16:06:43.164159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.613 qpair failed and we were unable to recover it. 00:28:44.613 [2024-05-15 16:06:43.164557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.614 [2024-05-15 16:06:43.164897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.614 [2024-05-15 16:06:43.164938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.614 qpair failed and we were unable to recover it. 00:28:44.614 [2024-05-15 16:06:43.165400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.165926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.165948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.166350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.166772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.166790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.167188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.167570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.167610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 
00:28:44.880 [2024-05-15 16:06:43.168081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.168540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.168559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.168991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.169462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.169503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.170036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.170545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.170586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.171117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.171581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.171622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.172134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.172672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.172713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.173109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.173534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.173576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.174028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.174548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.174589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 
00:28:44.880 [2024-05-15 16:06:43.175093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.175665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.175705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.176131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.176602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.176642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.177099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.177546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.177588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.177968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.178346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.178365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.178563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.178933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.178974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.880 [2024-05-15 16:06:43.179501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.179870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.880 [2024-05-15 16:06:43.179909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.880 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.180377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.180758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.180797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 
00:28:44.881 [2024-05-15 16:06:43.181241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.181754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.181793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.182240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.182604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.182644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.183030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.183476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.183492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.183969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.184426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.184466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.184870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.185362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.185404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.185943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.186385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.186425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.186917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.187315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.187363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 
00:28:44.881 [2024-05-15 16:06:43.187818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.188184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.188234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.188755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.189203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.189244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.189773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.190223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.190264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.190711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.191176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.191240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.191629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.192048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.192087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.192608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.193125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.193164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.193632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.194145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.194184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 
00:28:44.881 [2024-05-15 16:06:43.194725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.195237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.195279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.195806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.196205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.196222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.196670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.197109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.197148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.197608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.198150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.198199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.198700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.199142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.199181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.199652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.200147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.200187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.200647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.201174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.201226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 
00:28:44.881 [2024-05-15 16:06:43.201789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.202315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.202356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.202777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.203290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.203331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.203807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.204310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.204350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.881 qpair failed and we were unable to recover it. 00:28:44.881 [2024-05-15 16:06:43.204836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.205355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.881 [2024-05-15 16:06:43.205395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.205957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.206449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.206490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.207038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.207477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.207517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.208000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.208518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.208572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 
00:28:44.882 [2024-05-15 16:06:43.209147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.209571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.209611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.210055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.210568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.210609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.211168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.211698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.211738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.212287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.212679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.212719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.213243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.213790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.213830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.214310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.214763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.214803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.215266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.215664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.215703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 
00:28:44.882 [2024-05-15 16:06:43.216226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.216720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.216737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.217149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.217620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.217660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.218169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.218700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.218740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.219260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.219708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.219725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.220124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.220573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.220593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.221075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.221513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.221555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.222026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.222551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.222591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 
00:28:44.882 [2024-05-15 16:06:43.223112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.223614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.223656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.224241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.224741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.224780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.225333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.225776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.225815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.226365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.226912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.226952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.227461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.227913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.227952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.228490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.228935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.228974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.229483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.230006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.230045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 
00:28:44.882 [2024-05-15 16:06:43.230602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.231097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.231114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.882 [2024-05-15 16:06:43.231571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.231954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.882 [2024-05-15 16:06:43.231993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.882 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.232438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.232955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.232999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.233551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.234065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.234105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.234375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.234819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.234836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.235234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.235757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.235796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.236324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.236811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.236851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 
00:28:44.883 [2024-05-15 16:06:43.237106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.237554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.237571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.238049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.238520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.238560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.239088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.239482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.239522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.240032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.240574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.240615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.241071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.241558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.241599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.242102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.242537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.242577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.243103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.243546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.243587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 
00:28:44.883 [2024-05-15 16:06:43.244064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.244563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.244604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.245105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.245649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.245689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.246161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.246646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.246686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.247151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.247630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.247671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.248121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.248633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.248674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.249206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.249729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.249768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.250278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.250658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.250697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 
00:28:44.883 [2024-05-15 16:06:43.251220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.251728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.251767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.252246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.252691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.252730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.253115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.253626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.253666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.254085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.254539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.254556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.255004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.255427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.255468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.255990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.256502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.256542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.883 qpair failed and we were unable to recover it. 00:28:44.883 [2024-05-15 16:06:43.257044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.883 [2024-05-15 16:06:43.257568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.257607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 
00:28:44.884 [2024-05-15 16:06:43.258103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.258599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.258640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.259071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.259591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.259630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.260030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.260583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.260624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.261024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.261530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.261570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.262019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.262555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.262595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.263121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.263601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.263641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.264177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.264694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.264711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 
00:28:44.884 [2024-05-15 16:06:43.265121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.265465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.265504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.265994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.266419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.266459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.266985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.267389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.267429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.267966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.268435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.268476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.268970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.269447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.269487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.269996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.270429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.270447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.270859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.271401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.271442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 
00:28:44.884 [2024-05-15 16:06:43.272007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.272441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.272458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.272667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.272985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.273002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.273393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.273845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.273884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.274404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.274867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.274906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.275359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.275878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.275918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.276454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.276947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.276964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.277448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.277843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.277862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 
00:28:44.884 [2024-05-15 16:06:43.278254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.278669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.278708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.279244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.279760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.279777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.280277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.280806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.280844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.281360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.281784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.884 [2024-05-15 16:06:43.281824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.884 qpair failed and we were unable to recover it. 00:28:44.884 [2024-05-15 16:06:43.282354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.282800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.282817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.283315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.283776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.283816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.284261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.284758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.284797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 
00:28:44.885 [2024-05-15 16:06:43.285350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.285789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.285805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.286217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.286610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.286627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.287055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.287517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.287563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.287962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.288484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.288501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.288977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.289390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.289429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.289933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.290395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.290435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.290914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.291428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.291469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 
00:28:44.885 [2024-05-15 16:06:43.291983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.292358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.292398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.292982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.293521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.293562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.294039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.294566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.294606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.295164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.295708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.295749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.296274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.296717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.296757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.297308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.297816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.297861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.298384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.298879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.298918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 
00:28:44.885 [2024-05-15 16:06:43.299436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.299937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.299976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.300481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.300976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.301015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.301450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.301895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.301945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.885 qpair failed and we were unable to recover it. 00:28:44.885 [2024-05-15 16:06:43.302399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.302915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.885 [2024-05-15 16:06:43.302954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.303509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.304046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.304084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.304663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.305161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.305209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.305793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.306308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.306350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 
00:28:44.886 [2024-05-15 16:06:43.306927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.307451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.307492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.308015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.308561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.308606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.309162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.309641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.309681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.310235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.310706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.310745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.311278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.311812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.311851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.312379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.312881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.312920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.313475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.313924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.313963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 
00:28:44.886 [2024-05-15 16:06:43.314488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.314935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.314975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.315504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.316080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.316120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.316622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.317142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.317181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.317657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.318181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.318243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.318729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.319172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.319224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.319664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.320175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.320225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.320776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.321284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.321324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 
00:28:44.886 [2024-05-15 16:06:43.321877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.322322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.322362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.322890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.323362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.323403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.323942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.324452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.324492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.325048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.325550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.325599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.326010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.326490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.326531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.327103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.327610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.327651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.328113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.328553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.328593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 
00:28:44.886 [2024-05-15 16:06:43.329119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.329640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.329681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.886 [2024-05-15 16:06:43.330098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.330628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.886 [2024-05-15 16:06:43.330668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.886 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.331241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.331742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.331781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.332261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.332693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.332732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.333214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.333734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.333773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.334341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.334787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.334827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.335341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.335782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.335822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 
00:28:44.887 [2024-05-15 16:06:43.336296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.336803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.336820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.337282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.337831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.337870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.338430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.338944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.338984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.339541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.340054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.340093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.340651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.341096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.341135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.341678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.342109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.342148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.342703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.343224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.343266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 
00:28:44.887 [2024-05-15 16:06:43.343763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.344283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.344324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.344819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.345337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.345377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.345855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.346303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.346343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.346873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.347350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.347390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.347949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.348407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.348447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.348934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.349455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.349519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.350029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.350490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.350530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 
00:28:44.887 [2024-05-15 16:06:43.351004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.351406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.351447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.351989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.352509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.352549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.353055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.353503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.353544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.353982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.354430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.354471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.354977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.355428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.355469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.355971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.356513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.356553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.887 qpair failed and we were unable to recover it. 00:28:44.887 [2024-05-15 16:06:43.357107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.887 [2024-05-15 16:06:43.357627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.357668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 
00:28:44.888 [2024-05-15 16:06:43.358241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.358751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.358790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.359338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.359860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.359899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.360433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.360901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.360939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.361472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.361973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.362011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.362564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.363081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.363120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.363709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.364274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.364315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.364844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.365346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.365387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 
00:28:44.888 [2024-05-15 16:06:43.365944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.366460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.366501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.367011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.367554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.367594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.368029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.368426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.368468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.369011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.369584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.369625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.370222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.370672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.370712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.371266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.371812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.371852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.372431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.372948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.372987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 
00:28:44.888 [2024-05-15 16:06:43.373559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.374054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.374093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.374648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.375166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.375218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.375759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.376207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.376247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.376719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.377235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.377276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.377851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.378378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.378419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.378945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.379493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.379533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.379907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.380402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.380443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 
00:28:44.888 [2024-05-15 16:06:43.380996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.381517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.381558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.382080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.382596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.382635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.383159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.383567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.383608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.384147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.384675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.384716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.888 [2024-05-15 16:06:43.385242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.385791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.888 [2024-05-15 16:06:43.385830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.888 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.386335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.386861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.386901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.387367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.387865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.387905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 
00:28:44.889 [2024-05-15 16:06:43.388371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.388894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.388933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.389478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.390022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.390061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.390612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.391062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.391101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.391628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.392148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.392188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.392772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.393295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.393336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.393893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.394426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.394466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.395015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.395544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.395584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 
00:28:44.889 [2024-05-15 16:06:43.396106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.396611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.396651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.397170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.397636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.397675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.398124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.398664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.398704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.399240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.399771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.399810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.400369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.400896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.400935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.401512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.402031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.402071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 00:28:44.889 [2024-05-15 16:06:43.402583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.403032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.889 [2024-05-15 16:06:43.403072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:44.889 qpair failed and we were unable to recover it. 
00:28:44.889 [2024-05-15 16:06:43.403623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.889 [2024-05-15 16:06:43.404141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.889 [2024-05-15 16:06:43.404181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420
00:28:44.889 qpair failed and we were unable to recover it.
00:28:44.889 [2024-05-15 16:06:43.404694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.889 [2024-05-15 16:06:43.405225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.889 [2024-05-15 16:06:43.405266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420
00:28:44.889 qpair failed and we were unable to recover it.
[The same sequence repeats for every subsequent connection attempt in this span: two posix_sock_create "connect() failed, errno = 111" messages, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it.", with the in-message timestamps advancing from [2024-05-15 16:06:43.405843] through [2024-05-15 16:06:43.566007].]
00:28:45.160 [2024-05-15 16:06:43.566561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.567104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.567148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.567598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.568072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.568112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.568649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.569026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.569065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.569591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.570058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.570109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.570512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.570972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.571011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.571567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.572106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.572146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.572714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.573246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.573264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 
00:28:45.160 [2024-05-15 16:06:43.573762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.574105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.574122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.575303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.575758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.575779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.576186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.576674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.576715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.577251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.577753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.577793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.578339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.578744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.578761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.579732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.580252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.580299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.580793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.581314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.581376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 
00:28:45.160 [2024-05-15 16:06:43.581852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.582273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.582295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.582742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.583150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.583169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.583551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.583947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.583964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.584307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.584681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.584698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.585130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.585626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.585645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.585985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.586417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.586437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 00:28:45.160 [2024-05-15 16:06:43.586899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.587386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.160 [2024-05-15 16:06:43.587404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.160 qpair failed and we were unable to recover it. 
00:28:45.160 [2024-05-15 16:06:43.587884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.588293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.588311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.588785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.589110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.589128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.589542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.590014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.590031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.590540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.590994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.591013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.591419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.591822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.591839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.592277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.592626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.592643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.593112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.593628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.593646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 
00:28:45.161 [2024-05-15 16:06:43.594083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.594546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.594564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.594992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.595424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.595442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.595801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.596199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.596217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.596621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.597053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.597070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.597492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.597944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.597962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.598358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.598870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.598909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.599471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.599876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.599915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 
00:28:45.161 [2024-05-15 16:06:43.600332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.600731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.600770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.601250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.601713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.601753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.602229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.602624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.602663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.603114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.603651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.603702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.604287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.604811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.604850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.605355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.605877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.605917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.606491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.606885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.606924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 
00:28:45.161 [2024-05-15 16:06:43.607467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.607991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.608030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.608562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.609056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.609095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.609496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.609978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.609995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.610457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.610931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.610949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.611359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.611833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.611850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.161 qpair failed and we were unable to recover it. 00:28:45.161 [2024-05-15 16:06:43.612327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.161 [2024-05-15 16:06:43.612834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.612852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.613205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.613623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.613644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 
00:28:45.162 [2024-05-15 16:06:43.613904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.614375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.614393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.614874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.615377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.615395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.615917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.616389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.616406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.616747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.617226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.617243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.617654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.618120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.618138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.618650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.619015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.619032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.619525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.620046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.620063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 
00:28:45.162 [2024-05-15 16:06:43.620561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.620901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.620918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.621317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.621693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.621710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.622234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.622633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.622654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.623116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.623497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.623515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.623970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.624330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.624347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.624799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.625236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.625254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.625618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.625970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.625988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 
00:28:45.162 [2024-05-15 16:06:43.626440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.626838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.626855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.627276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.627654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.627672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.628069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.628460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.628478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.628881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.629283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.629301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.629757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.630284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.630301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.630696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.631179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.631207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.631616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.632038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.632055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 
00:28:45.162 [2024-05-15 16:06:43.632479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.632805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.632824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.633310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.633710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.633728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.634124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.634599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.634616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.162 [2024-05-15 16:06:43.635124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.635580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-05-15 16:06:43.635598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.162 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.635955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.636432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.636450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.636855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.637246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.637263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.637622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.638025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.638043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 
00:28:45.163 [2024-05-15 16:06:43.638429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.638824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.638841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.639320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.639739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.639756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.640177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.640609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.640627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.641084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.641547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.641564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.641991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.642468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.642485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.642914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.643391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.643409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.643839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.644245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.644263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 
00:28:45.163 [2024-05-15 16:06:43.644740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.645257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.645276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.645652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.646119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.646136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.646593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.647003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.647019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.647490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.647828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.647845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.648347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.648702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.648720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.649139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.649499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.649516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.649927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.650351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.650369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 
00:28:45.163 [2024-05-15 16:06:43.650825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.651351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.651369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.651777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.652187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.652215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.652569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.653044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.653061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.653412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.653784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.653801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.654279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.654632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.654649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.654993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.655443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.655462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.655939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.656453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.656470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 
00:28:45.163 [2024-05-15 16:06:43.656876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.657352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.657369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.657830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.658279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-05-15 16:06:43.658297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.163 qpair failed and we were unable to recover it. 00:28:45.163 [2024-05-15 16:06:43.658727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.659206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.659224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.659709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.660171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.660187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.660692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.661141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.661158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.661646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.662141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.662157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.662623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.663072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.663090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 
00:28:45.164 [2024-05-15 16:06:43.663557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.664059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.664076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.664571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.665019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.665036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.665471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.665870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.665888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.666282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.666673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.666689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.667163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.667634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.667651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.668138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.668528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.668545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.669002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.669455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.669471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 
00:28:45.164 [2024-05-15 16:06:43.669874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.670350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.670367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.670817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.671253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.671271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.671716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.672198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.672215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.672699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.673122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.673140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.673597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.673983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.673999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.674461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.674806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.674823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.675219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.675711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.675728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 
00:28:45.164 [2024-05-15 16:06:43.676206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.676694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.676710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.677134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.677583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.677599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.678042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.678515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.678533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.678991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.679458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.679476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.679869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.680344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.680361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.680865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.681339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-05-15 16:06:43.681357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.164 qpair failed and we were unable to recover it. 00:28:45.164 [2024-05-15 16:06:43.681776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.682268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.682286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 
00:28:45.165 [2024-05-15 16:06:43.682755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.683157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.683175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.683599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.683989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.684005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.684473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.685002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.685018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.685446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.685849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.685866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.686271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.686641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.686658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.687062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.687547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.687563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.687960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.688409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.688427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 
00:28:45.165 [2024-05-15 16:06:43.688896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.689307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.689325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.689667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.690104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.690121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.690549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.691013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.691030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.691478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.691867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.691884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.692341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.692806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.692823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.693242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.693637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.693653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.694046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.694529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.694546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 
00:28:45.165 [2024-05-15 16:06:43.695039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.695479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.695495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.695856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.696295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.696312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.696651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.697046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.697063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.697549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.697889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.697905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.698394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.698729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.698746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.699158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.699648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.699666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.165 [2024-05-15 16:06:43.700128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.700540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.700557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 
00:28:45.165 [2024-05-15 16:06:43.701026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.701504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.165 [2024-05-15 16:06:43.701521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.165 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.701970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.702416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.702434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f64000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.702543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x652140 is same with the state(5) to be set 00:28:45.166 [2024-05-15 16:06:43.703050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.703558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.703581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.704087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.704517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.704535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.704947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.705367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.705385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.705780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.706123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.706140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.706525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.706829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.706845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 
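Editor's note on the repeated failures above: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP connection attempt to 10.0.0.2 port 4420 is actively refused because nothing is accepting on that address/port at the moment, so nvme_tcp_qpair_connect_sock cannot bring the qpair up and each attempt ends with "qpair failed and we were unable to recover it." The following standalone C sketch is illustrative only (it is not SPDK code); the address and port are simply the ones appearing in the log, everything else is assumed for the example. It shows how a plain connect() against a refused port surfaces the same errno the log reports:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Minimal probe: attempt a blocking TCP connect to the target the log
     * shows failing. If nothing listens on that address/port, connect()
     * returns -1 and errno is ECONNREFUSED (111 on Linux). */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        else
            printf("connected\n");

        close(fd);
        return 0;
    }

Whether such a probe reports errno 111 (refused) or a timeout is a quick way to distinguish "host reachable but no listener on 4420" from "host unreachable" when this pattern floods a run.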
00:28:45.166 [2024-05-15 16:06:43.707299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.707750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.707767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.708239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.708647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.708665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.709142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.709524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.709541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.709936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.710412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.710433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.710782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.711199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.166 [2024-05-15 16:06:43.711216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.166 qpair failed and we were unable to recover it. 00:28:45.166 [2024-05-15 16:06:43.711667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.712064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.712102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-05-15 16:06:43.712651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.713056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.713074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 
00:28:45.431 [2024-05-15 16:06:43.713541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.713955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.713972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-05-15 16:06:43.714444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.714863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.714879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-05-15 16:06:43.715331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.715771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.715788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-05-15 16:06:43.716178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.716634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.716651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-05-15 16:06:43.717015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.717487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.717504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-05-15 16:06:43.717900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.718382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.718401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-05-15 16:06:43.718847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.719290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.719308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 
00:28:45.431 [2024-05-15 16:06:43.719662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.720023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.720040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-05-15 16:06:43.720462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.720872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.431 [2024-05-15 16:06:43.720888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.721291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.721678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.721695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.722168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.722560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.722577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.722967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.723355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.723372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.723763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.724232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.724249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.724703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.725143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.725160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 
00:28:45.432 [2024-05-15 16:06:43.725606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.726062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.726079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.726501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.726888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.726904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.727290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.727752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.727768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.728215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.728606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.728623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.729097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.729438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.729455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.729945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.730334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.730351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.730802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.731240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.731256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 
00:28:45.432 [2024-05-15 16:06:43.731696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.732157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.732173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.732601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.732986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.733004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.733398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.733846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.733862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.734325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.734713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.734729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.735066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.735502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.735519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.735915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.736360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.736385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.736792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.737234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.737250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 
00:28:45.432 [2024-05-15 16:06:43.737593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.737982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.738001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.738449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.738795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.738811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.739304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.739693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.739709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-05-15 16:06:43.740163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.432 [2024-05-15 16:06:43.740569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.740586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.741027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.741466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.741482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.741883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.742264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.742281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.742719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.743129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.743145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 
00:28:45.433 [2024-05-15 16:06:43.743530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.743916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.743933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.744387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.744776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.744792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.745244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.745580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.745596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.746005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.746471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.746490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.746949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.747407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.747424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.747909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.748258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.748275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.748758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.749210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.749227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 
00:28:45.433 [2024-05-15 16:06:43.749571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.749918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.749935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.750386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.750822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.750838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.751306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.751711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.751728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.752125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.752522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.752538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.752933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.753397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.753413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.753817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.754254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.754270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.754640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.754965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.754981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 
00:28:45.433 [2024-05-15 16:06:43.755448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.755885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.755901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.756306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.756694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.756710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.757135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.757465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.757481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.757921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.758325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.758341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.758678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.759010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.759027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.759440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.759825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.759841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.760294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.760635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.760651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 
00:28:45.433 [2024-05-15 16:06:43.761041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.761525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.761541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.433 qpair failed and we were unable to recover it. 00:28:45.433 [2024-05-15 16:06:43.761982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.433 [2024-05-15 16:06:43.762434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.762451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.762851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.763322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.763350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.763885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.764248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.764272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.764743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.765215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.765234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.765703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.766201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.766218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.766609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.767061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.767077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 
00:28:45.434 [2024-05-15 16:06:43.767479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.767905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.767921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.768414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.768850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.768867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.769260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.769704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.769720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.770184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.770601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.770618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.771095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.771482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.771498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.771854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.772310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.772326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.772789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.773308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.773325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 
00:28:45.434 [2024-05-15 16:06:43.773804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.774174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.774195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.774597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.775027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.775043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.775505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.775897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.775914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.776380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.776789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.776805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.777267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.777674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.777690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.778164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.778581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.778597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.434 qpair failed and we were unable to recover it. 00:28:45.434 [2024-05-15 16:06:43.779037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.434 [2024-05-15 16:06:43.779446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.779462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 
00:28:45.435 [2024-05-15 16:06:43.779943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.780429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.780446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.780913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.781391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.781407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.781809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.782202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.782218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.782641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.783081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.783097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.783575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.783955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.783972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.784431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.784842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.784858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.785246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.785702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.785718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 
00:28:45.435 [2024-05-15 16:06:43.786110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.786501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.786518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.786863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.787250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.787267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.787659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.788113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.788129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.788585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.788948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.788964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.789404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.789826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.789842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.790290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.790735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.790751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.791239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.791570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.791586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 
00:28:45.435 [2024-05-15 16:06:43.791974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.792366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.792382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.792828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.793256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.793272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.793678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.794158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.794174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.794640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.795165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.795181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.795631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.796019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.796035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.796489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.796813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.796830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.797241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.797585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.797600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 
00:28:45.435 [2024-05-15 16:06:43.797979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.798416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.798433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.798777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.799224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.799241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.799621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.799966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.799982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.800448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.800836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.435 [2024-05-15 16:06:43.800852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.435 qpair failed and we were unable to recover it. 00:28:45.435 [2024-05-15 16:06:43.801256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.801596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.801613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.802004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.802389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.802405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.802794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.803164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.803180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 
00:28:45.436 [2024-05-15 16:06:43.803643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.804165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.804182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.804582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.804921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.804937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.805354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.805792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.805808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.806270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.806684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.806700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.807046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.807517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.807533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.807990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.808396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.808413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.808780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.809200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.809217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 
00:28:45.436 [2024-05-15 16:06:43.809562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.809874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.809890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.810222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.810657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.810674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.811122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.811571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.811588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.811994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.812395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.812411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.812845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.813320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.813337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.813835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.814240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.814257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.814713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.815125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.815141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 
00:28:45.436 [2024-05-15 16:06:43.815603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.815984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.816005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.816466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.816827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.816843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.817241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.817677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.817693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.818184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.818693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.818710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.819125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.819508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.819525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.819914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.820369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.820385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.436 qpair failed and we were unable to recover it. 00:28:45.436 [2024-05-15 16:06:43.820773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.821283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.436 [2024-05-15 16:06:43.821299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 
00:28:45.437 [2024-05-15 16:06:43.821696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.822142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.822158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.822542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.822929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.822945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.823408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.823793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.823809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.824198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.824651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.824670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.825119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.825508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.825525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.825961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.826394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.826410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.826850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.827283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.827300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 
00:28:45.437 [2024-05-15 16:06:43.827737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.828275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.828292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.828780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.829292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.829309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.829785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.830256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.830273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.830725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.831126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.831142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.831528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.831914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.831929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.832265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.832661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.832677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 00:28:45.437 [2024-05-15 16:06:43.833121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.833520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.437 [2024-05-15 16:06:43.833539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.437 qpair failed and we were unable to recover it. 
00:28:45.437 [2024-05-15 16:06:43.833876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.834293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.834310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.834796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.835183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.835204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.835655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.836000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.836016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.836406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.836795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.836811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.837189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.837580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.837596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.837976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.838341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.838358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.838748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.839155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.839171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 
00:28:45.438 [2024-05-15 16:06:43.839567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.839911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.839928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.840378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.840812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.840828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.841292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.841725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.841742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.842203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.842609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.842625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.843063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.843543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.843559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.843953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.844419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.844436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.844840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.845337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.845354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 
00:28:45.438 [2024-05-15 16:06:43.845689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.846096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.846113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.846537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.846999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.847016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.847504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.847888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.847904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.848312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.848703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.848719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.849185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.849520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.849537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.850026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.850413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.850430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.850773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.851253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.851270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 
00:28:45.438 [2024-05-15 16:06:43.851756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.852170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.852186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.852637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.853106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.853122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.853533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.853924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.853939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.854399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.854876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.854892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.855371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.855752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.855769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.438 qpair failed and we were unable to recover it. 00:28:45.438 [2024-05-15 16:06:43.856252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.438 [2024-05-15 16:06:43.856710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.856727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.857211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.857603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.857619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 
00:28:45.439 [2024-05-15 16:06:43.858065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.858556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.858573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.858963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.859377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.859394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.859881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.860375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.860392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.860803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.861134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.861150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.861480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.861913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.861929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.862396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.862799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.862815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.863278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.863689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.863705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 
00:28:45.439 [2024-05-15 16:06:43.864133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.864571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.864587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.864991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.865413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.865430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.865813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.866272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.866289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.866686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.867156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.867172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.867597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.868003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.868019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.868411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.868866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.868883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.869327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.869723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.869740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 
00:28:45.439 [2024-05-15 16:06:43.870221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.870603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.870620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.871053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.871536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.871552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.872024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.872437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.872454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.872848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.873279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.873296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.873702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.874175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.874203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.874643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.875057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.875074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.875463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.875827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.875844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 
00:28:45.439 [2024-05-15 16:06:43.876231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.876668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.876684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.877121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.877616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.877633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.439 [2024-05-15 16:06:43.877975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.878442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.439 [2024-05-15 16:06:43.878458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.439 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.878831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.879267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.879284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.879622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.880078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.880094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.880488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.880915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.880932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.881426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.881863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.881879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 
00:28:45.440 [2024-05-15 16:06:43.882350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.882757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.882773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.883204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.883585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.883601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.884042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.884430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.884447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.884909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.885403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.885420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.885843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.886239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.886256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.886646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.887146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.887162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.887516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.888000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.888016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 
00:28:45.440 [2024-05-15 16:06:43.888492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.888942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.888958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.889398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.889782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.889798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.890249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.890571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.890588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.891013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.891479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.891496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.891955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.892416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.892432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.892820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.893277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.893294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.893687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.894185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.894206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 
00:28:45.440 [2024-05-15 16:06:43.894542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.894923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.894939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.895401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.895863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.895879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.896371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.896758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.896774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.897233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.897621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.897637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.898082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.898535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.898552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.898944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.899325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.899341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.899669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.900081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.900098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 
00:28:45.440 [2024-05-15 16:06:43.900482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.900943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.440 [2024-05-15 16:06:43.900958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.440 qpair failed and we were unable to recover it. 00:28:45.440 [2024-05-15 16:06:43.901282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.901674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.901690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.902158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.902549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.902565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.902963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.903417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.903434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.903818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.904275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.904292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.904686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.905075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.905092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.905531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.905917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.905933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 
00:28:45.441 [2024-05-15 16:06:43.906392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.906827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.906844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.907282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.907619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.907635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.908064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.908556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.908572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.908894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.909352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.909369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.909769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.910235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.910252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.910595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.910994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.911010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.911456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.911851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.911868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 
00:28:45.441 [2024-05-15 16:06:43.912313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.912772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.912788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.913234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.913690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.913707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.914197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.914622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.914638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.914980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.915442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.915458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.915850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.916237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.916254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.916643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.917099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.917115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.917563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.917949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.917965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 
00:28:45.441 [2024-05-15 16:06:43.918420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.918809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.918825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.919295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.919774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.919790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.920282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.920723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.920740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.921213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.921686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.921702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.922023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.922477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.922493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.441 [2024-05-15 16:06:43.922879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.923273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.441 [2024-05-15 16:06:43.923289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.441 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.923702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.924128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.924144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 
00:28:45.442 [2024-05-15 16:06:43.924564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.925000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.925016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.925478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.925872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.925889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.926269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.926728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.926744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.927135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.927526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.927542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.927937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.928394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.928411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.928802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.929242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.929258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.929658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.929993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.930009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 
00:28:45.442 [2024-05-15 16:06:43.930408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.930822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.930838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.931240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.931626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.931641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.931982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.932390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.932406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.932840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.933274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.933291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.933690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.934155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.934171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.934641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.934977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.934993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.935429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.935750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.935766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 
00:28:45.442 [2024-05-15 16:06:43.936211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.936645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.936662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.937088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.937505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.937521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.937962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.938355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.938372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.938757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.939150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.939166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.939641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.940110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.940126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.940586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.941043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.941059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.442 qpair failed and we were unable to recover it. 00:28:45.442 [2024-05-15 16:06:43.941438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.442 [2024-05-15 16:06:43.941819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.941835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 
00:28:45.443 [2024-05-15 16:06:43.942298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.942754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.942770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.943122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.943564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.943581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.943881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.944267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.944283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.944673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.945128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.945144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.945584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.945990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.946009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.946468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.946849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.946865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.947322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.947735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.947751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 
00:28:45.443 [2024-05-15 16:06:43.948247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.948619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.948636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.949019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.949407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.949423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.949792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.950185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.950207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.950669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.951074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.951091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.951453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.951815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.951831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.952219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.952623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.952640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.953060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.953437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.953453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 
00:28:45.443 [2024-05-15 16:06:43.953869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.954335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.954354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.954837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.955227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.955244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.955678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.956135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.956151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.956613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.957055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.957071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.957536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.957995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.958012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.958392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.958842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.958858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.959279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.959610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.959626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 
00:28:45.443 [2024-05-15 16:06:43.960093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.960537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.960554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.960990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.961424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.961440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.961902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.962349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.962366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.962808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.963254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.963274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.443 qpair failed and we were unable to recover it. 00:28:45.443 [2024-05-15 16:06:43.963668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.443 [2024-05-15 16:06:43.964104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.964120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.964583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.964912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.964928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.965391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.965777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.965793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 
00:28:45.444 [2024-05-15 16:06:43.966182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.966648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.966664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.966992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.967447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.967463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.967849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.968256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.968273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.968594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.969008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.969023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.969421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.969814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.969830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.970303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.970704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.970720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.971212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.971706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.971725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 
00:28:45.444 [2024-05-15 16:06:43.972140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.972525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.972541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.972933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.973369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.973395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.973837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.974302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.974319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.974737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.975206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.975223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.975586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.975975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.975991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.976457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.976837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.976853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.977296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.977625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.977642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 
00:28:45.444 [2024-05-15 16:06:43.977983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.978440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.978458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.978850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.979261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.979277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.979668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.980091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.980107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.980587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.980975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.980991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.981427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.981832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.981849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.982314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.982723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.982739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.983196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.983590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.983615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 
00:28:45.444 [2024-05-15 16:06:43.983943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.984376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.984393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.984773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.985157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.985173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.444 qpair failed and we were unable to recover it. 00:28:45.444 [2024-05-15 16:06:43.985548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.985991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.444 [2024-05-15 16:06:43.986008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.445 qpair failed and we were unable to recover it. 00:28:45.445 [2024-05-15 16:06:43.986520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.445 [2024-05-15 16:06:43.986959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.445 [2024-05-15 16:06:43.986975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.445 qpair failed and we were unable to recover it. 00:28:45.714 [2024-05-15 16:06:43.987426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.987805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.987827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-05-15 16:06:43.988278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.988622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.988639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-05-15 16:06:43.989012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.989471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.989488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 
00:28:45.714 [2024-05-15 16:06:43.989894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.990285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.990301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-05-15 16:06:43.990749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.991142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-05-15 16:06:43.991158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-05-15 16:06:43.991520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.991959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.991975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-05-15 16:06:43.992436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.992813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.992829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-05-15 16:06:43.993206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.993675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.993692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-05-15 16:06:43.994113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.994522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.994538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-05-15 16:06:43.994928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.995386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-05-15 16:06:43.995402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 
00:28:45.715 [2024-05-15 16:06:43.995740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.996202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.996219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:43.996608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.997072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.997088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:43.997526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.997987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.998003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:43.998340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.998778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.998794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:43.999322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.999722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:43.999738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.000232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.000627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.000643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.000974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.001301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.001317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.001753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.002138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.002154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.002545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.003002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.003018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.003505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.003839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.003855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.004318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.004776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.004792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.005241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.005654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.005671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.006146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.006551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.006567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.006911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.007394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.007411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.007852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.008290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.008307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.008736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.009216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.009232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.009625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.010123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.010147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.010648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.011115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.011131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.011569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.011963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.011979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.012418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.012877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-05-15 16:06:44.012894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
00:28:45.715 [2024-05-15 16:06:44.013262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.013642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.013658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.014108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.014577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.014593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.014995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.015432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.015452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.015922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.016394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.016411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.016865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.017261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.017277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.017720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.018096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.018113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.018499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.018886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.018902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.019310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.019694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.019712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.020174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.020648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.020668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.021116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.021509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.021530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.021953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.022422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.022441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.022884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.023353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.023379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.023778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.024221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.024240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.024626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.025089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.025108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.025553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.025887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.025910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.026320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.026763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.026779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.027170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.027638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.027655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.028051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.028430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.028447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.028862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.029258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.029275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.029723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.030203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.030220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.030664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.031079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.031095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.031585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.032022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.032037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.032568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.032979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.032996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.033444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.033838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.033854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.034327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.034765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.034781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.716 [2024-05-15 16:06:44.035247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.035583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.716 [2024-05-15 16:06:44.035599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.716 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.035941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.036329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.036346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.036768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.037211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.037228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.037641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.038032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.038048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.038425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.038835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.038852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.039325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.039708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.039724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.040200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.040686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.040702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.041115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.041515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.041532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.041983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.042416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.042432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.042829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.043287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.043303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.043642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.043957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.043974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.044378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.044874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.044891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.045356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.045697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.045713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.046111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.046502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.046519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.046864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.047301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.047318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.047703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.048159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.048175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.048576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.049037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.049053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.049495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.049891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.049911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.050378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.050793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.050809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.051295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.051632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.051648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.052003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.052416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.052432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.052819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.053254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.053271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.053685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.054149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.054166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.054498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.054888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.054904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.055251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.055666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.055683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.056018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.056476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.056493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.717 qpair failed and we were unable to recover it.
00:28:45.717 [2024-05-15 16:06:44.056977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.717 [2024-05-15 16:06:44.057363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.057379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.057762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.058131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.058147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.058527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.058984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.059000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.059339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.059729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.059746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.060136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.060571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.060588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.061052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.061435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.061452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.061832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.062290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.062306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.062741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.063136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.063153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.063604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.064058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.064074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.064536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.064927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.064943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.065395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.065786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.065802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.066203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.066631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.066647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.067037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.067487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.067504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.067846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.068305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.068321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.068728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.069148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.069164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.069611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.070000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.070017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.070472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.070886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.070902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.071313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.071726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.071742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.072199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.072622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.072638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.073083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.073486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.073502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.073895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.074297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.074314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.074693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.075159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.075175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.075653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.076141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.076158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.076631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.077111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.077127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.718 qpair failed and we were unable to recover it.
00:28:45.718 [2024-05-15 16:06:44.077539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.077877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.718 [2024-05-15 16:06:44.077893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.078382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.078765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.078781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.079187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.079583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.079600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.079990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.080426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.080443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.080785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.081273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.081290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.081779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.082262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.082278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.082672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.083004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.083020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.083482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.083874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.083890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.084287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.084679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.084695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.085150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.085524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.085540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.085881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.086350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.086367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.086829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.087328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.087344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.087783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.088178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.088197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.088552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.088893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.088909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.089297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.089731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.089747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.090267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.090653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.090669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.091003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.091418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.091435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.091816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.092278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.092295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.092623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.093038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.093054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.093430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.093819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.093836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.094229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.094675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.094691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.095093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.095479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.095496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-05-15 16:06:44.095821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.096278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-05-15 16:06:44.096294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.096672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.097011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.097027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.097517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.097953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.097969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.098310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.098718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.098735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.099145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.099601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.099617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.100006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.100499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.100516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.100902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.101358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.101375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.101800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.102205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.102221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.102613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.103000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.720 [2024-05-15 16:06:44.103016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:45.720 qpair failed and we were unable to recover it.
00:28:45.720 [2024-05-15 16:06:44.103456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.103894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.103910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.104350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.104797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.104814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.105200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.105539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.105556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.105943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.106377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.106394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.106830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.107215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.107232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.107690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.108147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.108163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.108583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.109018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.109034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 
00:28:45.720 [2024-05-15 16:06:44.109493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.109876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.109893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.110335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.110731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.110748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.111232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.111562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.111577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.111922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.112300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.112317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.112783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.113237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.113254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-05-15 16:06:44.113683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-05-15 16:06:44.114140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.114156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.114560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.114942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.114957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 
00:28:45.721 [2024-05-15 16:06:44.115396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.115847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.115863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.116253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.116687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.116703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.117165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.117552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.117568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.117983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.118325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.118351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.118764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.119153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.119169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.119514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.119902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.119919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.120283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.120695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.120711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 
00:28:45.721 [2024-05-15 16:06:44.121185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.121599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.121615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.121946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.122379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.122396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.122785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.123257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.123273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.123668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.124004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.124020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.124362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.124821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.124837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.125194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.125630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.125647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.126086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.126537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.126556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 
00:28:45.721 [2024-05-15 16:06:44.126994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.127406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.127423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.127862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.128320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.128337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.128776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.129232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.129249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.129657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.130115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.130132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.130624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.131042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.131058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.131549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.131882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.131898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.132292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.132728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.132745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 
00:28:45.721 [2024-05-15 16:06:44.133216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.133643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.133659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.134109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.134567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.134584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.134977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.135425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-05-15 16:06:44.135444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-05-15 16:06:44.135885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.136343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.136360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.136760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.137237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.137253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.137720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.138142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.138158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.138618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.138940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.138956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 
00:28:45.722 [2024-05-15 16:06:44.139358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.139793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.139810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.140273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.140712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.140728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.141213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.141657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.141673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.142125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.142584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.142601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.143013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.143416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.143433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.143890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.144269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.144289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.144750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.145134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.145150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 
00:28:45.722 [2024-05-15 16:06:44.145613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.146048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.146064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.146502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.146885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.146901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.147362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.147827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.147843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.148354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.148837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.148853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.149243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.149583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.149599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.149987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.150447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.150463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.150805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.151120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.151136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 
00:28:45.722 [2024-05-15 16:06:44.151576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.151984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.152000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.152442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.152886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.152902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.153287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.153725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.153742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.154215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.154619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.154635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.155016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.155395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.155412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.155852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.156283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.156300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.156765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.157231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.157248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 
00:28:45.722 [2024-05-15 16:06:44.157656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.157980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.722 [2024-05-15 16:06:44.157996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.722 qpair failed and we were unable to recover it. 00:28:45.722 [2024-05-15 16:06:44.158461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.158845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.158862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.159242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.159683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.159699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.160095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.160562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.160578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.160967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.161416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.161434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.161871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.162304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.162320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.162722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.163087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.163103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 
00:28:45.723 [2024-05-15 16:06:44.163493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.163974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.163990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.164325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.164735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.164751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.165201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.165563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.165580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.166073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.166556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.166572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.167021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.167481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.167497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.167937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.168373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.168391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.168853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.169259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.169275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 
00:28:45.723 [2024-05-15 16:06:44.169730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.170244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.170261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.170671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.171003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.171019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.171485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.171870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.171886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.172327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.172653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.172669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.173113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.173447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.173463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.173832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.174214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.174230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.174688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.175015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.175031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 
00:28:45.723 [2024-05-15 16:06:44.175469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.175924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.175940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.176378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.176817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.176833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.177198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.177531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.177546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.178031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.178512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.178529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.179001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.179453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.179470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.723 qpair failed and we were unable to recover it. 00:28:45.723 [2024-05-15 16:06:44.179926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.723 [2024-05-15 16:06:44.180310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.180327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.180780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.181236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.181252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 
00:28:45.724 [2024-05-15 16:06:44.181773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.182253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.182269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.182714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.183176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.183203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.183587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.183957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.183973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.184404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.184813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.184829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.185268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.185657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.185673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.186063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.186535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.186552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.187025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.187422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.187438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 
00:28:45.724 [2024-05-15 16:06:44.187847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.188284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.188300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.188688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.189148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.189164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.189507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.189913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.189929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.190413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.190747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.190763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.191172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.191641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.191657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.192160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.192663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.192680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.193124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.193582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.193598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 
00:28:45.724 [2024-05-15 16:06:44.194035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.194492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.194509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.194994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.195456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.195473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.195961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.196422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.196438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.196932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.197391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.197408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.197797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.198196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.198212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.198592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.199026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.199042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.199505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.199910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.199926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 
00:28:45.724 [2024-05-15 16:06:44.200384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.200842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.200858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.201340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.201828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.201844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.202306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.202764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.724 [2024-05-15 16:06:44.202780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.724 qpair failed and we were unable to recover it. 00:28:45.724 [2024-05-15 16:06:44.203268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.203749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.203765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.204237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.204578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.204594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.204982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.205440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.205456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.205944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.206328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.206344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 
00:28:45.725 [2024-05-15 16:06:44.206752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.207212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.207228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.207715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.208199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.208215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.208679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.209134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.209150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.209610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.209993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.210009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.210398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.210843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.210859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.211320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.211773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.211789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.212228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.212685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.212701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 
00:28:45.725 [2024-05-15 16:06:44.213161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.213622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.213639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.214047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.214429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.214448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.214906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.215363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.215379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.215858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.216226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.216242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.216711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.217206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.217222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.217598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.218054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.218070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.218476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.218913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.218929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 
00:28:45.725 [2024-05-15 16:06:44.219242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.219699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.219715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.220206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.220661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.220678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.221052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.221429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.221445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.725 qpair failed and we were unable to recover it. 00:28:45.725 [2024-05-15 16:06:44.221815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.725 [2024-05-15 16:06:44.224563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.224581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.225085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.225474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.225491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.225943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.226383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.226399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.226862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.227317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.227333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 
00:28:45.726 [2024-05-15 16:06:44.227817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.228201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.228218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.228606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.229056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.229072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.229510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.229967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.229983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.230468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.230906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.230922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.231384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.231773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.231789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.232237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.232694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.232710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.233164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.233625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.233642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 
00:28:45.726 [2024-05-15 16:06:44.234104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.234560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.234577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.234968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.235428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.235445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.235932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.236377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.236394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.236854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.237241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.237257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.237710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.238113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.238129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.238617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.239098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.239114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.239558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.239918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.239934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 
00:28:45.726 [2024-05-15 16:06:44.240398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.240905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.240921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.241396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.241842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.241859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.242319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.242709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.242725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.243174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.243563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.243580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.243963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.244421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.244437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.244949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.245407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.245423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.726 [2024-05-15 16:06:44.245908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.246295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.246311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 
00:28:45.726 [2024-05-15 16:06:44.246719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.247171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.726 [2024-05-15 16:06:44.247187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.726 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.247673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.248159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.248175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.248653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.249120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.249136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.249620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.250084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.250100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.250588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.250991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.251007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.251417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.251875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.251891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.252352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.252819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.252835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 
00:28:45.727 [2024-05-15 16:06:44.253305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.253809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.253827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.254301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.254703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.254719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.255178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.255640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.255657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.256146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.256581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.256598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.257060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.257462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.257478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.257842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.258274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.258291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.258694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.259102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.259118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 
00:28:45.727 [2024-05-15 16:06:44.259536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.259918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.259934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.260395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.260870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.260887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.261386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.261823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.261840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.262180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.262606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.262628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.262970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.263355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.263371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.263835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.264251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.264273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 00:28:45.727 [2024-05-15 16:06:44.264770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.265225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.727 [2024-05-15 16:06:44.265243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.727 qpair failed and we were unable to recover it. 
00:28:45.727 [2024-05-15 16:06:44.265754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.266238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.266265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.994 qpair failed and we were unable to recover it. 00:28:45.994 [2024-05-15 16:06:44.266713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.267141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.267160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.994 qpair failed and we were unable to recover it. 00:28:45.994 [2024-05-15 16:06:44.267572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.267949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.267966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.994 qpair failed and we were unable to recover it. 00:28:45.994 [2024-05-15 16:06:44.268363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.268805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.268821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.994 qpair failed and we were unable to recover it. 00:28:45.994 [2024-05-15 16:06:44.269218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.269579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.269595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.994 qpair failed and we were unable to recover it. 00:28:45.994 [2024-05-15 16:06:44.270055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.270436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.270453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.994 qpair failed and we were unable to recover it. 00:28:45.994 [2024-05-15 16:06:44.270845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.994 [2024-05-15 16:06:44.271295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.271315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 
00:28:45.995 [2024-05-15 16:06:44.271800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.272237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.272253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.272718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.273177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.273197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.273684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.274143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.274159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.274621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.275070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.275086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.275540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.275976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.275992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.276451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.276864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.276887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.277334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.277755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.277777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 
00:28:45.995 [2024-05-15 16:06:44.278226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.278583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.278602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.278992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.279452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.279474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.279809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.280205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.280226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.280713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.281208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.281225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.281646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.282092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.282110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.282486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.282873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.282890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.283329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.283778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.283794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 
00:28:45.995 [2024-05-15 16:06:44.284257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.284773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.284789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.285284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.285685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.285701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.286089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.286538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.286555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.286992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.287425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.287441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.287832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.288169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.288185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.288620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.289004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.289020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.995 [2024-05-15 16:06:44.289423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.289879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.289895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 
00:28:45.995 [2024-05-15 16:06:44.290359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.290817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.995 [2024-05-15 16:06:44.290834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.995 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.291294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.291676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.291692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.292150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.292534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.292551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.293009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.293470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.293487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.293921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.294381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.294397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.294740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.295141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.295157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.295640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.296105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.296121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 
00:28:45.996 [2024-05-15 16:06:44.296583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.297061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.297077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.297548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.297919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.297935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.298400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.298883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.298899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.299359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.299742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.299758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.300174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.300639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.300656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.300987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.301319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.301336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.301797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.302175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.302195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 
00:28:45.996 [2024-05-15 16:06:44.302656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.303138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.303155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.303542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.304025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.304041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.304524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.304854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.304870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.305276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.305671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.305688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.306129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.306531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.306548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.307009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.307400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.307419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 00:28:45.996 [2024-05-15 16:06:44.307858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.308312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.308328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.996 qpair failed and we were unable to recover it. 
00:28:45.996 [2024-05-15 16:06:44.308494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.996 [2024-05-15 16:06:44.308873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.308889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.309273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.309734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.309750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.310189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.310526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.310542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.310980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.311416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.311433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.311816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.312215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.312232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.312598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.312979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.312995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.313370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.313777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.313792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 
00:28:45.997 [2024-05-15 16:06:44.314257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.314661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.314678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.314812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.315210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.315227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.315623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.315988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.316004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.316416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.316895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.316911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.317276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.317735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.317751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.318167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.318639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.318656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.319074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.319452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.319469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 
00:28:45.997 [2024-05-15 16:06:44.319854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.320306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.320323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.320813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.321251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.321271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.321665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.322105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.322121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.322614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.322999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.323015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.323385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.323794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.323813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.324204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.324353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.324368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.324753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.325196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.325212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 
00:28:45.997 [2024-05-15 16:06:44.325649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.325955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.325973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.326361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.326818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.326834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.327222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.327621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.997 [2024-05-15 16:06:44.327637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.997 qpair failed and we were unable to recover it. 00:28:45.997 [2024-05-15 16:06:44.327966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.328380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.328399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.328820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.329280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.329296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.329755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.330214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.330234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.330634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.331003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.331019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 
00:28:45.998 [2024-05-15 16:06:44.331414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.331872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.331888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.332357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.332796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.332815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.333314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.333651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.333667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.334042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.334435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.334452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.334893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.335333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.335353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.335724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.336119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.336135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.336573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.336951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.336967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 
00:28:45.998 [2024-05-15 16:06:44.337371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.337750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.337769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.338079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.338471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.338490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.338951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.339386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.339402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.339837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.340159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.340177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.340600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.341062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.341080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.341472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.341902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.341919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.342383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.342703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.342719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 
00:28:45.998 [2024-05-15 16:06:44.343109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.343562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.343579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.344042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.344417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.344433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.344828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.345257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.345274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.345655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.346104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.346120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.998 [2024-05-15 16:06:44.346621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.347075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.998 [2024-05-15 16:06:44.347091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.998 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.347535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.347980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.347995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.348384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.348834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.348850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 
00:28:45.999 [2024-05-15 16:06:44.349309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.349701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.349717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.350169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.350596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.350612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.351101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.351555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.351571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.351899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.352367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.352384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.352790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.353246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.353263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.353721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.354132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.354149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.354586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.354923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.354939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 
00:28:45.999 [2024-05-15 16:06:44.355400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.355776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.355792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.356200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.356589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.356606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.356987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.357451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.357468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.357908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.358343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.358362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.358718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.359198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.359215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.359685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.360138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.360153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.360545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.360935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.360951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 
00:28:45.999 [2024-05-15 16:06:44.361397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.361774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.361790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.362160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.362547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.362564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.363004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.363465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.363482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.363900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.364332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.364349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.364807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.365240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.365257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:45.999 qpair failed and we were unable to recover it. 00:28:45.999 [2024-05-15 16:06:44.365718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.366172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.999 [2024-05-15 16:06:44.366188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.366618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.366998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.367017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 
00:28:46.000 [2024-05-15 16:06:44.367490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.367820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.367836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.368304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.368739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.368756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.369215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.369549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.369565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.369989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.370444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.370460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.370897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.371321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.371337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.371730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.372161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.372177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.372569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.372892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.372907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 
00:28:46.000 [2024-05-15 16:06:44.373367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.373802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.373819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.374219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.374675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.374692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.375175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.375591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.375611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.376099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.376419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.376436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.376825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.377283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.377300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.377706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.378142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.378159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.378667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.379129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.379145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 
00:28:46.000 [2024-05-15 16:06:44.379631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.380010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.380026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.380490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.380997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.381013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.381415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.381800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.000 [2024-05-15 16:06:44.381816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.000 qpair failed and we were unable to recover it. 00:28:46.000 [2024-05-15 16:06:44.382272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.382708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.382725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.383137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.383570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.383587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.384046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.384509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.384529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.384930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.385364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.385381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 
00:28:46.001 [2024-05-15 16:06:44.385822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.386285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.386301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.386761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.387200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.387216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.387675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.388057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.388073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.388512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.388767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.388782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.389163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.389672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.389689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.390076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.390512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.390529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.390974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.391284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.391300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 
00:28:46.001 [2024-05-15 16:06:44.391764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.392144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.392159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.392553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.392962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.392978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.393361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.393823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.393839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.394322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.394730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.394746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.395194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.395575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.395591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.396048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.396503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.396520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.396981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.397359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.397376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 
00:28:46.001 [2024-05-15 16:06:44.397861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.398340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.398356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.398756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.399086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.399102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.399485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.399931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.399947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.001 qpair failed and we were unable to recover it. 00:28:46.001 [2024-05-15 16:06:44.400319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.001 [2024-05-15 16:06:44.400777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.400793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.401183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.401673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.401689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.402144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.402531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.402548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.402932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.403368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.403384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 
00:28:46.002 [2024-05-15 16:06:44.403766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.404199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.404216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.404582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.404911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.404927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.405396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.405831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.405846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.406303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.406717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.406733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.407199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.407656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.407672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.408131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.408585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.408601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.409048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.409433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.409449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 
00:28:46.002 [2024-05-15 16:06:44.409908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.410292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.410309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.410686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.411144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.411160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.411645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.412025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.412041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.412407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.412869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.412885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.413277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.413678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.413694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.414100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.414478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.414494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.414884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.415342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.415359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 
00:28:46.002 [2024-05-15 16:06:44.415763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.416151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.416167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.416629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.416965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.416981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.417440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.417897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.417913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.418333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.418793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.418808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.002 [2024-05-15 16:06:44.419267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.419716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-05-15 16:06:44.419732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.002 qpair failed and we were unable to recover it. 00:28:46.003 [2024-05-15 16:06:44.420197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-05-15 16:06:44.420581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-05-15 16:06:44.420597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.003 qpair failed and we were unable to recover it. 00:28:46.003 [2024-05-15 16:06:44.420986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-05-15 16:06:44.421397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-05-15 16:06:44.421413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.003 qpair failed and we were unable to recover it. 
00:28:46.003 [2024-05-15 16:06:44.421875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.003 [2024-05-15 16:06:44.422052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.003 [2024-05-15 16:06:44.422071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:46.003 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix.c:1037:posix_sock_create "connect() failed, errno = 111" entries, then an nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f3f74000b90 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 16:06:44.421 through 16:06:44.557 ...]
00:28:46.274 [2024-05-15 16:06:44.557174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.274 [2024-05-15 16:06:44.557616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.274 [2024-05-15 16:06:44.557632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420
00:28:46.274 qpair failed and we were unable to recover it.
00:28:46.274 [2024-05-15 16:06:44.558044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.558481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.558498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.274 qpair failed and we were unable to recover it. 00:28:46.274 [2024-05-15 16:06:44.558814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.559261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.559278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.274 qpair failed and we were unable to recover it. 00:28:46.274 [2024-05-15 16:06:44.559737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.560136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.560151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.274 qpair failed and we were unable to recover it. 00:28:46.274 [2024-05-15 16:06:44.560597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.561033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.561049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.274 qpair failed and we were unable to recover it. 00:28:46.274 [2024-05-15 16:06:44.561519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.561931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.274 [2024-05-15 16:06:44.561947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.274 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.562437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.562894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.562910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.563292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.563668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.563684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 
00:28:46.275 [2024-05-15 16:06:44.564123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.564457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.564473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.564895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.565296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.565313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.565698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.566152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.566168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.566670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.567073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.567089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.567537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.567991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.568007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.568493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.568946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.568963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.569352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.569723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.569739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 
00:28:46.275 [2024-05-15 16:06:44.570200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.570601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.570617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.570991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.571441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.571457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.571873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.572332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.572349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.572750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.573118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.573134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.573601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.574063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.574087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.574470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.574909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.574925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.575364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.575823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.575839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 
00:28:46.275 [2024-05-15 16:06:44.576319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.576696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.576713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.577178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.577646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.577662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.578121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.578528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.578544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.579029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.579417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.579433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.579885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.580259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.580276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.580665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.581101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.581117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.581556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.582014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.582030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 
00:28:46.275 [2024-05-15 16:06:44.582423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.582871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.582887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.583326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.583710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.583726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.275 qpair failed and we were unable to recover it. 00:28:46.275 [2024-05-15 16:06:44.584182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.584643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.275 [2024-05-15 16:06:44.584659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.585039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.585495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.585512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.585968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.586428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.586445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.586853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.587291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.587308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f74000b90 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.587726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.588202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.588221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 
00:28:46.276 [2024-05-15 16:06:44.588641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.589120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.589136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.589551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.590011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.590027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.590536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.590997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.591013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.591500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.591901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.591917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.592358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.592813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.592829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.593210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.593646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.593662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.594099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.594503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.594519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 
00:28:46.276 [2024-05-15 16:06:44.595005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.595444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.595460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.595863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.596307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.596324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.596759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.597097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.597113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.597496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.597951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.597966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.598409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.598806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.598822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.599208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.599668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.599685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.600160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.600534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.600551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 
00:28:46.276 [2024-05-15 16:06:44.601040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.601370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.601386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.601849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.602229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.602245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.602708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.603168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.603184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.603676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.604131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.604147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.604590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.605009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.605025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.605461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.605867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.605886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.276 [2024-05-15 16:06:44.606277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.606724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.606740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 
00:28:46.276 [2024-05-15 16:06:44.607127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.607562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.276 [2024-05-15 16:06:44.607578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.276 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.608036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.608420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.608436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.608771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.609203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.609220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.609660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.610116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.610132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.610584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.611039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.611055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.611540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.611981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.611998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.612383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.612786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.612802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 
00:28:46.277 [2024-05-15 16:06:44.613203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.613603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.613619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.614078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.614468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.614484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.614944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.615399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.615415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.615802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.616257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.616273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.616659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.617115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.617131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.617616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.618022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.618038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.618444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.618875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.618891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 
00:28:46.277 [2024-05-15 16:06:44.619354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.619809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.619826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.620214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.620623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.620639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.621088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.621446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.621462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.621834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.622301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.622318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.622775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.623080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.623095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.623505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.623915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.623931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.624416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.624806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.624822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 
00:28:46.277 [2024-05-15 16:06:44.625227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.625615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.625630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.626006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.626476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.626492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.626965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.627365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.627381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.627821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.628198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.628215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.628609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.628952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.628968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.629454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.629933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.277 [2024-05-15 16:06:44.629949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.277 qpair failed and we were unable to recover it. 00:28:46.277 [2024-05-15 16:06:44.630420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.630874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.630891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 
00:28:46.278 [2024-05-15 16:06:44.631353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.631753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.631769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.632230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.632639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.632655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.633051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.633497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.633513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.633950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.634407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.634424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.634831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.635287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.635303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.635720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.636173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.636189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.636680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.637129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.637145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 
00:28:46.278 [2024-05-15 16:06:44.637511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.637973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.637989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.638378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.638827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.638844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.639232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.639667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.639683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.640144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.640527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.640544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.640917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.641385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.641402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.641857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.642313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.642329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.642716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.643170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.643186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 
00:28:46.278 [2024-05-15 16:06:44.643604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.644038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.644054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.644535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.645022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.645039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.645505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.646009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.646025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.646518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.646923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.646939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.647378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.647761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.647777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.648163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.648620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.648636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.649119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.649607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.649624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 
00:28:46.278 [2024-05-15 16:06:44.650088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.650548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.650567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.651053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.651499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.651515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.651978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.652368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.652385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.278 [2024-05-15 16:06:44.652838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.653273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.278 [2024-05-15 16:06:44.653289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.278 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.653748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.654067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.654083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.654475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.654902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.654919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.655381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.655835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.655851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 
00:28:46.279 [2024-05-15 16:06:44.656314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.656749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.656764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.657169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.657584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.657600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.658085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.658489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.658505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.658989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.659428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.659444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.659904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.660362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.660378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.660768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.661238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.661254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.661737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.662119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.662135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 
00:28:46.279 [2024-05-15 16:06:44.662593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.662971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.662987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.663450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.663910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.663926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.664363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.664748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.664763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.665127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.665561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.665578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.666041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.666499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.666515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.666953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.667409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.667425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.667814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.668264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.668280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 
00:28:46.279 [2024-05-15 16:06:44.668657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.669094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.669111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.669548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.669949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.669965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.670330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.670793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.670808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.671231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.671664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-05-15 16:06:44.671681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.279 qpair failed and we were unable to recover it. 00:28:46.279 [2024-05-15 16:06:44.672119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.672521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.672537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.673025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.673343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.673358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.673818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.674276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.674292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 
00:28:46.280 [2024-05-15 16:06:44.674730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.675126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.675142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.675538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.675901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.675917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.676349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.676749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.676765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.677200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.677682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.677698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.678162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.678674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.678692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.679215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.679648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.679664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.680121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.680501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.680518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 
00:28:46.280 [2024-05-15 16:06:44.680933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.681308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.681325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.681791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.682200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.682216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.682701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.683139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.683155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.683616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.684073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.684089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.684510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.684972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.684988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.685379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.685822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.685838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.686300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.686665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.686684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 
00:28:46.280 [2024-05-15 16:06:44.687147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.687606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.687622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.688107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.688588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.688605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.689070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.689473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.689489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.689976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.690411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.690427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.690890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.691323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.691339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.691716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.692173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.692188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.692682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.693074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.693090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 
00:28:46.280 [2024-05-15 16:06:44.693541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.693926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.693942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.694395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.694854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.694870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.280 qpair failed and we were unable to recover it. 00:28:46.280 [2024-05-15 16:06:44.695360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-05-15 16:06:44.695746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.695762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.696235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.696690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.696706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.697167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.697630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.697647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.698058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.698512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.698528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.699012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.699496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.699512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 
00:28:46.281 [2024-05-15 16:06:44.699980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.700479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.700495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.700961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.701427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.701452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.701927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.702388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.702404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.702840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.703273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.703290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.703750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.704149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.704165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.704660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.705096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.705112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.705551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.706008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.706024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 
00:28:46.281 [2024-05-15 16:06:44.706465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.706920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.706936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.707372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.707830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.707846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.708303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.708763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.708779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.709265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.709752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.709768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.710241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.710732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.710748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.711211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.711624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.711640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.712024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.712478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.712495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 
00:28:46.281 [2024-05-15 16:06:44.712956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.713391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.713407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.713777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.714231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.714247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.714735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.715220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.715236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.715704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.716176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.716202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.716614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.716974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.716990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.717451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.717778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.717794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.281 [2024-05-15 16:06:44.718252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.718712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.718728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 
00:28:46.281 [2024-05-15 16:06:44.719212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.719677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-05-15 16:06:44.719693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.281 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.720174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.720644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.720660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.721056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.721500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.721516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.721931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.722333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.722349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.722835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.723246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.723261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.723654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.724132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.724148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.724565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.725001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.725017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 
00:28:46.282 [2024-05-15 16:06:44.725477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.725936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.725952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.726438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.726830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.726846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.727146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.727580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.727596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.728034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.728411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.728427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.728738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.729201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.729217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.729700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.730162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.730178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.730668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.731045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.731062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 
00:28:46.282 [2024-05-15 16:06:44.731521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.731977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.731993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.732480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.732964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.732983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.733424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.733858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.733874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.734337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.734770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.734786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.735171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.735627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.735643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.736012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.736471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.736487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.736873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.737333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.737350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 
00:28:46.282 [2024-05-15 16:06:44.737783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.738240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.738257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.738697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.739152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.739167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.739581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.739996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.740012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.740470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.740904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.740920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.741380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.741833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.741850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.742229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.742636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.282 [2024-05-15 16:06:44.742653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.282 qpair failed and we were unable to recover it. 00:28:46.282 [2024-05-15 16:06:44.743135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.743515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.743531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 
00:28:46.283 [2024-05-15 16:06:44.743992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.744447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.744463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.744837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.745207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.745223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.745683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.746141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.746157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.746644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.747128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.747144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.747516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.747980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.747996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.748480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.748962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.748978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.749451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.749845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.749862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 
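Note on the repeating pattern above: errno 111 is ECONNREFUSED on Linux, so each cycle in this log is the host-side NVMe/TCP initiator attempting connect() to 10.0.0.2:4420 while nothing is listening there, after which nvme_tcp_qpair_connect_sock reports the socket connection error for qpair 0x644560 and the harness records that the qpair could not be recovered. The following is not SPDK code, only a minimal self-contained sketch of that socket-level behaviour; the retry count and back-off delay are illustrative assumptions.

/* Minimal sketch (not SPDK code): retry a TCP connect() while the peer
 * refuses connections (errno == ECONNREFUSED, i.e. errno 111 on Linux).
 * The target address, retry count and delay are illustrative assumptions. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
{
    for (int attempt = 1; attempt <= max_tries; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                      /* connected */

        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);

        if (errno != ECONNREFUSED)          /* only retry "nothing listening" */
            return -1;
        usleep(500 * 1000);                 /* back off before the next try */
    }
    return -1;                              /* caller treats this as unrecoverable */
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 5);
    if (fd < 0) {
        fprintf(stderr, "gave up: connection still refused\n");
        return 1;
    }
    close(fd);
    return 0;
}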
00:28:46.283 [2024-05-15 16:06:44.750266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3923286 Killed "${NVMF_APP[@]}" "$@" 00:28:46.283 [2024-05-15 16:06:44.750567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.750588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.751025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:46.283 [2024-05-15 16:06:44.751405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.751423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:46.283 [2024-05-15 16:06:44.751884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.752266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.752282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:46.283 [2024-05-15 16:06:44.752737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.283 [2024-05-15 16:06:44.753209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.753227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.753709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.754102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.754119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.754484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.754871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.754887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 
00:28:46.283 [2024-05-15 16:06:44.755338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.755773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.755789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.756245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.756680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.756696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.757149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.757554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.757572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.758028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.758428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.758444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.758885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.759266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.759282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.759696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.760153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.760169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 [2024-05-15 16:06:44.760660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.760993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.761009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 
00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3924113 00:28:46.283 [2024-05-15 16:06:44.761337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3924113 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:46.283 [2024-05-15 16:06:44.761765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.761783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.283 qpair failed and we were unable to recover it. 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3924113 ']' 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.283 [2024-05-15 16:06:44.762158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:46.283 [2024-05-15 16:06:44.762625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.283 [2024-05-15 16:06:44.762643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.284 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:46.284 [2024-05-15 16:06:44.763118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 16:06:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.284 [2024-05-15 16:06:44.763564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.763582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.763940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.764401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.764418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 
00:28:46.284 [2024-05-15 16:06:44.764822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.765288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.765306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.765770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.766257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.766275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.766755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.767228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.767244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.767635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.768062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.768079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.768534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.768919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.768935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.769372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.769809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.769825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.770283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.770619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.770635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 
00:28:46.284 [2024-05-15 16:06:44.771095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.771420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.771436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.771772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.772201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.772217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.772694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.773100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.773116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.773549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.773949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.773966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.774414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.774806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.774822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.775270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.775650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.775666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.776127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.776567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.776584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 
00:28:46.284 [2024-05-15 16:06:44.777046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.777500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.777518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.778006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.778340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.778356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.778748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.779140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.779156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.779575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.780012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.780028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.284 qpair failed and we were unable to recover it. 00:28:46.284 [2024-05-15 16:06:44.780420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.284 [2024-05-15 16:06:44.780870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.780886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.781287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.781628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.781647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.782132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.782569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.782587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 
00:28:46.285 [2024-05-15 16:06:44.782979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.783427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.783444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.783835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.784246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.784263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.784706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.785026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.785042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.785430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.785761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.785777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.786242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.786632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.786648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.787101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.787540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.787557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.788017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.788399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.788415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 
00:28:46.285 [2024-05-15 16:06:44.788797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.789034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.789051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.789331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.789647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.789663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.790059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.790446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.790462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.790917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.791356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.791373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.791868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.792003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.792019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.792503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.792682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.792698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.793037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.793518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.793534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 
00:28:46.285 [2024-05-15 16:06:44.793861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.794297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.794314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.794757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.795237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.795254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.795734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.796056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.796072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.796557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.797009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.797025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.797465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.797855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.797872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.798361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.798738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.798754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.799100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.799474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.799490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 
00:28:46.285 [2024-05-15 16:06:44.799937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.800329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.800345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.800786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.801224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.801241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.285 qpair failed and we were unable to recover it. 00:28:46.285 [2024-05-15 16:06:44.801635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.802035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.285 [2024-05-15 16:06:44.802051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.802520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.802958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.802974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.803299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.803762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.803777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.804097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.804554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.804570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.804795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.805255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.805273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 
00:28:46.286 [2024-05-15 16:06:44.805595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.805985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.806001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644560 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.806032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x652140 (9): Bad file descriptor 00:28:46.286 [2024-05-15 16:06:44.806521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.806927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.806942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.807421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.807791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.807804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.808080] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:28:46.286 [2024-05-15 16:06:44.808131] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.286 [2024-05-15 16:06:44.808184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.808582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.808594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.809043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.809478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.809490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.809867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.810241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.810254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 
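Two details in the block above are worth calling out: the "(9): Bad file descriptor" on the flush is errno 9 (EBADF), consistent with a qpair socket that has already been closed by the time the completion path touches it (subsequent errors switch from tqpair=0x644560 to 0x7f3f6c000b90), and the EAL parameter line shows the new target coming up with coremask 0xF0. Decoding that mask (illustrative only) shows which CPUs the reactors get:

# Decode the 0xF0 coremask from the EAL parameter line above.
mask=0xF0
printf 'reactor cores:'
for bit in $(seq 0 31); do
    (( (mask >> bit) & 1 )) && printf ' %d' "$bit"
done
echo   # prints: reactor cores: 4 5 6 7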
00:28:46.286 [2024-05-15 16:06:44.810425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.810787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.810799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.811231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.811535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.811548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.811920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.812377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.812389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.812687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.813065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.813077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.813463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.813913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.813925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.814376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.814695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.814707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.815083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.815289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.815303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 
00:28:46.286 [2024-05-15 16:06:44.815662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.816093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.816106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.816402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.816788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.816801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.817257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.817645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.817657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.818064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.818516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.818528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.819006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.819423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.819435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.819887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.820275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.820287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 00:28:46.286 [2024-05-15 16:06:44.820665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.821117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.286 [2024-05-15 16:06:44.821130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.286 qpair failed and we were unable to recover it. 
00:28:46.286 [2024-05-15 16:06:44.821496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.821897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.821909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.822362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.822729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.822741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.823121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.823548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.823560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.823942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.824317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.824329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.824456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.824892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.824904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.825365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.825815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.825827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.826257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.826557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.826569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 
00:28:46.287 [2024-05-15 16:06:44.826966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.827404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.827416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.827791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.828239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.828251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.828617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.829047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.829059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.287 [2024-05-15 16:06:44.829423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.829573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.287 [2024-05-15 16:06:44.829586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.287 qpair failed and we were unable to recover it. 00:28:46.553 [2024-05-15 16:06:44.830041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.830349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.830361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.553 qpair failed and we were unable to recover it. 00:28:46.553 [2024-05-15 16:06:44.830724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.831044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.831056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.553 qpair failed and we were unable to recover it. 00:28:46.553 [2024-05-15 16:06:44.831516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.831967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.831979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.553 qpair failed and we were unable to recover it. 
00:28:46.553 [2024-05-15 16:06:44.832345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.832732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.832744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.553 qpair failed and we were unable to recover it. 00:28:46.553 [2024-05-15 16:06:44.833172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.833562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.553 [2024-05-15 16:06:44.833575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.833900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.834300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.834313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.834619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.834839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.834851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.835316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.835469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.835482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.835934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.836380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.836393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.836769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.837069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.837081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 
00:28:46.554 [2024-05-15 16:06:44.837466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.837838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.837850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.838213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.838369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.838380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.838679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.839047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.839059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.839207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.839502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.839515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.839968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.840160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.840172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.840573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.840956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.840968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.841355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.841739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.841751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 
00:28:46.554 [2024-05-15 16:06:44.841942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.842385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.842397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.842829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.843122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.843134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.843508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.843821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.843833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.844298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.844660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.844672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.845048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.845438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 [2024-05-15 16:06:44.845450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.554 qpair failed and we were unable to recover it. 00:28:46.554 [2024-05-15 16:06:44.845828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.554 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.555 [2024-05-15 16:06:44.846254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.846267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.846697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.847016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.847027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 
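The "EAL: No free 2048 kB hugepages reported on node 1" notice above means DPDK found no free 2 MB hugepages on NUMA node 1 while this target initialized, so its hugepage allocations are served from the node(s) that do have free pages. A quick per-node check of the kernel's counters (illustrative; these are standard sysfs paths, independent of SPDK) looks like:

# Per-node 2048 kB hugepage counters, matching the EAL notice above.
for node in /sys/devices/system/node/node*; do
    hp="$node/hugepages/hugepages-2048kB"
    [ -d "$hp" ] || continue
    printf '%s: total=%s free=%s\n' \
        "$(basename "$node")" "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
done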
00:28:46.555 [2024-05-15 16:06:44.847167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.847643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.847655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.848077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.848508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.848521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.848888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.849257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.849269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.849646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.849968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.849980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.850425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.850727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.850739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.851133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.851491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.851503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.851677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.851990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.852001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 
00:28:46.555 [2024-05-15 16:06:44.852450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.852920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.852933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.853363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.853790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.853803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.854199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.854514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.854527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.854829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.855186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.855203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.855649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.856097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.856109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.856472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.856869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.856881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.857079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.857534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.857546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 
00:28:46.555 [2024-05-15 16:06:44.857983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.858298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.858311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.858685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.858998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.859011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.555 [2024-05-15 16:06:44.859455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.859804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.555 [2024-05-15 16:06:44.859816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.555 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.860279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.860726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.860738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.861102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.861467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.861479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.861859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.862171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.862183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.862636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.863014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.863026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 
00:28:46.556 [2024-05-15 16:06:44.863419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.863612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.863624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.863768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.864233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.864245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.864608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.864918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.864929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.865223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.865650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.865663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.866092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.866307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.866320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.866689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.867047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.867059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.867486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.867936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.867948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 
00:28:46.556 [2024-05-15 16:06:44.868336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.868714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.868726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.869003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.869372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.869385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.869698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.870146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.870158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.870532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.870883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.870896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.871345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.871705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.871717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.871995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.872305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.872317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.556 [2024-05-15 16:06:44.872679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.873127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.873139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 
00:28:46.556 [2024-05-15 16:06:44.873619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.873913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.556 [2024-05-15 16:06:44.873925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.556 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.874283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.874755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.874767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.875096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.875481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.875494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.875920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.876300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.876312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.876686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.876986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.876998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.877393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.877817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.877829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.878280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.878707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.878719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 
00:28:46.557 [2024-05-15 16:06:44.879176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.879534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.879546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.879856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.880231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.880243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.880615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.880936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.880948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.881400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.881827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.881842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.882240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.882496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.882508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.882807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.883237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.883249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.883714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.884045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.884058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 
00:28:46.557 [2024-05-15 16:06:44.884511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.884893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.884905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.885346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.885789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.885801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.886258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.886573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.886585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.886939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.887389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.887402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.887850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.888302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.888314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.557 [2024-05-15 16:06:44.888760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.889212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.557 [2024-05-15 16:06:44.889225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.557 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.889537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.889960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.889974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 
00:28:46.558 [2024-05-15 16:06:44.890386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.890726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.890738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.891194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.891573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.891586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.891971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.892398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.892412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.892818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.893267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.893279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.893602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.894028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.894040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.894489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.894940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.894952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.895401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.895771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.895784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 
00:28:46.558 [2024-05-15 16:06:44.896142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.896515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.896528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.896957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.897426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.897438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.897811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.898236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.898250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.898609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.899037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.899051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.899454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.899902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.899914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.900294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.900746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.900758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.901188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.901565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.901577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 
00:28:46.558 [2024-05-15 16:06:44.901864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.902040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.558 [2024-05-15 16:06:44.902291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.902304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.902706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.903029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.903042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.558 qpair failed and we were unable to recover it. 00:28:46.558 [2024-05-15 16:06:44.903479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.558 [2024-05-15 16:06:44.903923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.903937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.904313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.904665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.904678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.905047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.905497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.905511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.905845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.906227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.906243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.906519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.906888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.906902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 
00:28:46.559 [2024-05-15 16:06:44.907306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.907738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.907753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.907953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.908351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.908364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.908764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.909033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.909046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.909414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.909850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.909866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.910077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.910396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.910410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.910810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.911269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.911285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.911721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.912164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.912177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 
00:28:46.559 [2024-05-15 16:06:44.912639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.912963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.912976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.913238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.913584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.913599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.913820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.914270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.914283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.914579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.914886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.914899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.915302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.915668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.915680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.916112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.916472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.916484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 00:28:46.559 [2024-05-15 16:06:44.916816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.917193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.559 [2024-05-15 16:06:44.917206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.559 qpair failed and we were unable to recover it. 
00:28:46.559 [2024-05-15 16:06:44.917657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.917959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.917972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.918287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.918643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.918656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.919028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.919403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.919415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.919819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.920120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.920132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.920442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.920867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.920881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.921247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.921624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.921637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.922000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.922424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.922436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 
00:28:46.560 [2024-05-15 16:06:44.922814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.923101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.923113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.923489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.923846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.923859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.924286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.924710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.924722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.925086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.925454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.925466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.925893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.926262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.926274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.926672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.927100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.927112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.927508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.927807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.927820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 
00:28:46.560 [2024-05-15 16:06:44.928252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.928678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.928692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.929121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.929511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.929524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.560 qpair failed and we were unable to recover it. 00:28:46.560 [2024-05-15 16:06:44.929907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.560 [2024-05-15 16:06:44.930277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.930289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.930627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.931081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.931094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.931479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.931835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.931847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.932184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.932543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.932555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.932981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.933431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.933444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 
00:28:46.561 [2024-05-15 16:06:44.933767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.934202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.934216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.934517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.934953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.934966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.935395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.935845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.935858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.936251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.936625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.936638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.937070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.937497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.937510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.937941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.938377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.938393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.938711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.938843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.938856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 
00:28:46.561 [2024-05-15 16:06:44.939239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.939612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.939625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.940004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.940318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.940332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.940482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.940913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.940927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.941293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.941720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.941733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.942099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.942390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.942403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.942863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.943334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.943348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.943706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.944088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.944102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 
00:28:46.561 [2024-05-15 16:06:44.944404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.944855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.944868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.945322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.945769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.945783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.946094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.946452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.946466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.946848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.947207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.947219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.947666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.948115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.948126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.948510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.948906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.948918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 00:28:46.561 [2024-05-15 16:06:44.949340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.949720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.949732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.561 qpair failed and we were unable to recover it. 
00:28:46.561 [2024-05-15 16:06:44.950084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.561 [2024-05-15 16:06:44.950460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.950472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.950661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.951020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.951032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.951484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.951842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.951854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.952158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.952518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.952531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.952886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.953262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.953274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.953654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.954027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.954039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.954434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.954791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.954803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 
00:28:46.562 [2024-05-15 16:06:44.955255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.955612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.955624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.956064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.956534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.956546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.956974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.957425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.957437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.957885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.958252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.958264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.958737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.959117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.959129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.959559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.959953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.959965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.960436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.960818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.960830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 
00:28:46.562 [2024-05-15 16:06:44.961283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.961731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.961744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.962139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.962587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.962599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.962988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.963441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.963454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.963834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.964280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.964293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.964722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.965173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.965185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.965619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.965973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.965986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.966333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.966784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.966796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 
00:28:46.562 [2024-05-15 16:06:44.967251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.967698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.967710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.968160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.968586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.968599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.969031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.969481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.969494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.969887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.970255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.970267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.562 qpair failed and we were unable to recover it. 00:28:46.562 [2024-05-15 16:06:44.970739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.562 [2024-05-15 16:06:44.971231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.971243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.971724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.972036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.972048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.972495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.972921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.972933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 
00:28:46.563 [2024-05-15 16:06:44.973387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.973764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.973776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:46.563 qpair failed and we were unable to recover it.
00:28:46.563 [2024-05-15 16:06:44.974233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.974682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.974695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:46.563 qpair failed and we were unable to recover it.
00:28:46.563 [2024-05-15 16:06:44.975122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.975576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.975588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:46.563 qpair failed and we were unable to recover it.
00:28:46.563 [2024-05-15 16:06:44.976037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.976140] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:46.563 [2024-05-15 16:06:44.976172] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:46.563 [2024-05-15 16:06:44.976182] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:46.563 [2024-05-15 16:06:44.976195] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:46.563 [2024-05-15 16:06:44.976203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:46.563 [2024-05-15 16:06:44.976328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:28:46.563 [2024-05-15 16:06:44.976486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.976499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:46.563 qpair failed and we were unable to recover it.
00:28:46.563 [2024-05-15 16:06:44.976437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:28:46.563 [2024-05-15 16:06:44.976458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:28:46.563 [2024-05-15 16:06:44.976460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:28:46.563 [2024-05-15 16:06:44.976928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.977377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.563 [2024-05-15 16:06:44.977390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:46.563 qpair failed and we were unable to recover it.
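The app_setup_trace NOTICE lines above double as a how-to for grabbing the nvmf tracepoint data while the target is still up. A minimal sketch of that, using only the commands and the /dev/shm/nvmf_trace.0 path quoted by the log itself (the destination path under /tmp is an assumption, not part of the test):

  # Capture a snapshot of the running nvmf target's tracepoints, per the NOTICE above
  spdk_trace -s nvmf -i 0
  # Since this is the only SPDK application running, the notice says plain spdk_trace also works
  spdk_trace
  # Or keep the shared-memory trace file for offline analysis (destination path is hypothetical)
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0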
00:28:46.563 [2024-05-15 16:06:44.977790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.978236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.978249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.978613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.978982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.978995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.979454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.979831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.979843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.980272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.980668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.980681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.981109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.981557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.981570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.981877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.982261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.982274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.982726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.983054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.983067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 
00:28:46.563 [2024-05-15 16:06:44.983463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.983843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.983856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.984313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.984695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.984708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.985138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.985567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.985580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.986011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.986440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.986453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.986827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.987255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.987268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.987721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.988172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.988187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.988621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.989000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.989013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 
00:28:46.563 [2024-05-15 16:06:44.989397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.989793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.989806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.990134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.990581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.563 [2024-05-15 16:06:44.990594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.563 qpair failed and we were unable to recover it. 00:28:46.563 [2024-05-15 16:06:44.991046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.991493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.991506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.991886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.992341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.992355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.992789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.993216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.993229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.993693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.994072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.994085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.994540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.994931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.994945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 
00:28:46.564 [2024-05-15 16:06:44.995341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.995792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.995806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.996186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.996646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.996660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.997037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.997487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.997500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.997916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.998369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.998383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.998872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.999359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:44.999372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:44.999843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.000291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.000305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.000710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.001113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.001126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 
00:28:46.564 [2024-05-15 16:06:45.001567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.002016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.002029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.002422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.002873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.002885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.003334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.003785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.003798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.004197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.004645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.004658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.005034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.005482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.005496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.005882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.006272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.006285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.006719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.007166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.007178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 
00:28:46.564 [2024-05-15 16:06:45.007565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.007951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.007963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.008416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.008865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.008877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.009268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.009698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.009711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.010163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.010589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.010602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.010929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.011311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.011324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.011781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.012189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.012206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 00:28:46.564 [2024-05-15 16:06:45.012608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.013060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.564 [2024-05-15 16:06:45.013072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.564 qpair failed and we were unable to recover it. 
00:28:46.564 [2024-05-15 16:06:45.013500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.013951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.013964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.014420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.014855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.014866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.015294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.015744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.015756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.016210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.016625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.016637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.017079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.017506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.017519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.017972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.018360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.018373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.018788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.019249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.019262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 
00:28:46.565 [2024-05-15 16:06:45.019717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.020167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.020179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.020572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.021025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.021039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.021469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.021902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.021915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.022366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.022746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.022759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.023156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.023531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.023545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.023843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.024304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.024317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.024771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.025081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.025093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 
00:28:46.565 [2024-05-15 16:06:45.025459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.025865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.025878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.026255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.026686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.026698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.027156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.027538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.027552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.027877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.028328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.028341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.028796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.029226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.029238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.029637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.030084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.030097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.565 [2024-05-15 16:06:45.030554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.030879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.030893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 
00:28:46.565 [2024-05-15 16:06:45.031230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.031661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.565 [2024-05-15 16:06:45.031675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.565 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.032058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.032459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.032472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.032882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.033302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.033314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.033679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.034081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.034093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.034543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.034989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.035001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.035477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.035876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.035888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.036265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.036711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.036723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 
00:28:46.566 [2024-05-15 16:06:45.037090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.037559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.037571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.037898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.038352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.038364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.038739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.039129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.039141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.039601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.039971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.039983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.040372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.040769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.040781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.041157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.041532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.041545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.041928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.042382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.042395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 
00:28:46.566 [2024-05-15 16:06:45.042710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.043137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.043149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.043577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.044026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.044038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.044489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.044870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.044882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.045275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.045650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.045662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.046093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.046533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.046545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.046996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.047364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.047376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.047751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.048198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.048211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 
00:28:46.566 [2024-05-15 16:06:45.048668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.049098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.049110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.049505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.049884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.049896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.050353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.050801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.050813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.051242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.051717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.051729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.566 qpair failed and we were unable to recover it. 00:28:46.566 [2024-05-15 16:06:45.052203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.052593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.566 [2024-05-15 16:06:45.052605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.053059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.053487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.053500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.053948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.054396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.054408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 
00:28:46.567 [2024-05-15 16:06:45.054881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.055334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.055347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.055700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.056129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.056141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.056513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.056984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.056996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.057382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.057754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.057767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.058218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.058647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.058659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.059059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.059503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.059515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.059967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.060336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.060348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 
00:28:46.567 [2024-05-15 16:06:45.060826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.061299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.061312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.061715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.062024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.062036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.062493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.062919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.062931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.063349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.063712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.063724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.064078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.064525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.064537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.064964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.065387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.065399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.065827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.066189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.066216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 
00:28:46.567 [2024-05-15 16:06:45.066475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.066919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.066931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.067087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.067509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.067522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.067908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.068335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.068347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.068778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.069183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.069198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.069628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.069998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.070010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.070405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.070833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.070845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.071222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.071606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.071618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 
00:28:46.567 [2024-05-15 16:06:45.071976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.072420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.072432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.567 qpair failed and we were unable to recover it. 00:28:46.567 [2024-05-15 16:06:45.072812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.073261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.567 [2024-05-15 16:06:45.073273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.073640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.074068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.074081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.074439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.074886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.074898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.075275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.075672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.075684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.076136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.076564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.076577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.076967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.077320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.077333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 
00:28:46.568 [2024-05-15 16:06:45.077710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.078076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.078088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.078473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.078861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.078873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.079336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.079762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.079774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.080149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.080503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.080515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.080886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.081094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.081105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.081476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.081931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.081943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.082339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.082790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.082801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 
00:28:46.568 [2024-05-15 16:06:45.083196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.083554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.083566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.083916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.084363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.084375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.084843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.085211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.085225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.085601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.086047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.086059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.086416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.086865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.086878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.087272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.087640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.087652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.088101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.088476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.088488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 
00:28:46.568 [2024-05-15 16:06:45.088863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.089287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.089299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.089660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.090026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.090038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.090487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.090881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.090893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.091321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.091747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.091759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.568 qpair failed and we were unable to recover it. 00:28:46.568 [2024-05-15 16:06:45.092141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.568 [2024-05-15 16:06:45.092522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.092535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.092892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.093290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.093304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.093755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.094202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.094214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 
00:28:46.569 [2024-05-15 16:06:45.094570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.094994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.095006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.095456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.095862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.095874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.096352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.096732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.096744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.097142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.097545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.097557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.097986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.098456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.098469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.098947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.099418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.099430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.099858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.100302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.100314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 
00:28:46.569 [2024-05-15 16:06:45.100812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.101135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.101147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.101599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.102049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.102065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.102456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.102837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.102849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.103303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.103749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.103761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.104147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.104590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.104603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.104980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.105372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.105384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.105812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.106260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.106272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 
00:28:46.569 [2024-05-15 16:06:45.106725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.107065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.107077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.107471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.107915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.107927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.569 [2024-05-15 16:06:45.108326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.108719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.569 [2024-05-15 16:06:45.108731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.569 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.109199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.109647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.109658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.110087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.110540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.110554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.111007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.111359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.111371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.111823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.112248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.112260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 
00:28:46.835 [2024-05-15 16:06:45.112620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.113093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.113105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.113580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.114031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.114043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.114496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.114945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.114957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.115399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.115850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.115862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.116217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.116574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.116586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.117035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.117459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.117488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.117921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.118372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.118384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 
00:28:46.835 [2024-05-15 16:06:45.118836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.119205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.119217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.119696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.120143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.120155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.120584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.121031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.121043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.121498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.121922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.121934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.122321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.122779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.122790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.123242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.123690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.123702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.124155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.124604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.124616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 
00:28:46.835 [2024-05-15 16:06:45.125067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.125515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.125528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.125979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.126404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.126416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.126876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.127299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.835 [2024-05-15 16:06:45.127311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.835 qpair failed and we were unable to recover it. 00:28:46.835 [2024-05-15 16:06:45.127698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.128125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.128137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.128593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.129019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.129031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.129441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.129877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.129889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.130316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.130762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.130774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 
00:28:46.836 [2024-05-15 16:06:45.131088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.131518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.131530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.131985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.132481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.132493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.132974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.133291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.133303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.133754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.134200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.134212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.134584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.134962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.134974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.135434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.135863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.135874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.136257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.136706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.136718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 
00:28:46.836 [2024-05-15 16:06:45.137170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.137552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.137564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.138021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.138470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.138482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.138931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.139318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.139330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.139759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.140210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.140222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.140676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.141123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.141135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.141498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.141895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.141907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.142354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.142803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.142815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 
00:28:46.836 [2024-05-15 16:06:45.143262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.143585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.143596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.144022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.144474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.144486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.144939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.145382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.145394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.145772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.146229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.146241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.146694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.147142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.147154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.147580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.148001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.148013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.148489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.148920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.148932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 
00:28:46.836 [2024-05-15 16:06:45.149384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.149778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.149790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.150226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.150652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.150664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.151047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.151499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.151511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.836 qpair failed and we were unable to recover it. 00:28:46.836 [2024-05-15 16:06:45.151965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.836 [2024-05-15 16:06:45.152415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.152427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.152880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.153329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.153341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.153770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.154220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.154232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.154684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.155135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.155147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 
00:28:46.837 [2024-05-15 16:06:45.155598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.155957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.155970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.156323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.156773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.156785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.157238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.157691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.157703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.158156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.158582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.158594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.158952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.159341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.159353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.159806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.160250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.160262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.160643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.161069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.161081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 
00:28:46.837 [2024-05-15 16:06:45.161456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.161918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.161930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.162409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.162882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.162894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.163275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.163702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.163714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.164095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.164523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.164535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.164984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.165410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.165422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.165795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.166251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.166263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.166711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.167158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.167169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 
00:28:46.837 [2024-05-15 16:06:45.167502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.167878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.167890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.168259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.168687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.168699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.169149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.169620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.169633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.169994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.170392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.170405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.170859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.171287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.171299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.171706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.172076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.172088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.172444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.172920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.172933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 
00:28:46.837 [2024-05-15 16:06:45.173377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.173732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.173744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.174231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.174679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.174692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.175049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.175429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.175441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.175820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.176289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.176303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.837 qpair failed and we were unable to recover it. 00:28:46.837 [2024-05-15 16:06:45.176655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.177057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.837 [2024-05-15 16:06:45.177069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.177523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.177845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.177856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.178213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.178594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.178606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 
00:28:46.838 [2024-05-15 16:06:45.179056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.179435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.179448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.179877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.180324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.180337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.180718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.181180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.181195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.181576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.182003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.182015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.182443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.182828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.182840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.183300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.183754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.183766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.184212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.184609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.184621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 
00:28:46.838 [2024-05-15 16:06:45.184946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.185321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.185333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.185788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.186238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.186250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.186697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.187125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.187137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.187567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.187977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.187988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.188449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.188830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.188843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.189214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.189675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.189687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.190077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.190546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.190558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 
00:28:46.838 [2024-05-15 16:06:45.190924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.191285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.191298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.191628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.192063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.192075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.192574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.193035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.193047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.193474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.193900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.193912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.194267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.194623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.194635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.195017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.195465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.195477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.195932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.196288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.196300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 
00:28:46.838 [2024-05-15 16:06:45.196682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.197131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.197143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.197503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.197929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.197941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.198371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.198709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.198722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.838 [2024-05-15 16:06:45.199171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.199493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.838 [2024-05-15 16:06:45.199505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.838 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.199880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.200340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.200353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.200802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.201257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.201269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.201659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.202038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.202050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 
00:28:46.839 [2024-05-15 16:06:45.202435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.202809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.202822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.203279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.203670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.203682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.204113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.204533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.204546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.205022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.205401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.205414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.205868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.206255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.206268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.206728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.207199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.207211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.207665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.208083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.208095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 
00:28:46.839 [2024-05-15 16:06:45.208540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.208993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.209005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.209434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.209771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.209783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.210230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.210546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.210558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.210966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.211413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.211426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.211814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.212269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.212281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.212733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.213205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.213217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.213668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.214076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.214089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 
00:28:46.839 [2024-05-15 16:06:45.214545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.214942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.214955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.215404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.215853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.215866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.216297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.216680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.216692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.217162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.217627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.217639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.839 [2024-05-15 16:06:45.217976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.218425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.839 [2024-05-15 16:06:45.218437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.218813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.219239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.219253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.219633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.220060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.220072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 
00:28:46.840 [2024-05-15 16:06:45.220518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.220950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.220963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.221361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.221806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.221818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.222147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.222518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.222532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.222915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.223242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.223254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.223638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.224044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.224056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.224532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.224903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.224916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.225318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.225696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.225708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 
00:28:46.840 [2024-05-15 16:06:45.226096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.226475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.226487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.226869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.227315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.227328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.227737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.228111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.228123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.228581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.228919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.228932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.229401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.229725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.229738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.230072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.230505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.230521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.230907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.231283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.231296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 
00:28:46.840 [2024-05-15 16:06:45.231691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.232074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.232086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.232520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.232945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.232959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.233340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.233712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.233725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.234178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.234586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.234598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.235029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.235479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.235491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.235816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.236222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.236235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.236685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.237089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.237101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 
00:28:46.840 [2024-05-15 16:06:45.237535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.237966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.237978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.238436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.238808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.238822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.239280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.239730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.239742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-05-15 16:06:45.240150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-05-15 16:06:45.240535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.240548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.240944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.241414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.241427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.241823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.242126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.242138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.242570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.242952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.242963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 
00:28:46.841 [2024-05-15 16:06:45.243391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.243742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.243755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.244129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.244582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.244594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.244988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.245440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.245454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.245793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.246290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.246303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.246756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.247261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.247275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.247755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.248231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.248243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.248694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.249159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.249171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 
00:28:46.841 [2024-05-15 16:06:45.249577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.250013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.250025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.250457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.250904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.250916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.251368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.251797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.251810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.252258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.252683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.252695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.253131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.253578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.253590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.254046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.254481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.254494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.254876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.255324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.255337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 
00:28:46.841 [2024-05-15 16:06:45.255719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.256069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.256081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.256487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.256856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.256868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.257241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.257641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.257653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.258050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.258519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.258532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.258959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.259387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.259400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.259776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.260204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.260217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.260668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.261160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.261172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 
00:28:46.841 [2024-05-15 16:06:45.261620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.262024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.262035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.262428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.262881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.262893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.263267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.263619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.263631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.264011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.264440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.841 [2024-05-15 16:06:45.264453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.841 qpair failed and we were unable to recover it. 00:28:46.841 [2024-05-15 16:06:45.264881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.265248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.265261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 00:28:46.842 [2024-05-15 16:06:45.265628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.266105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.266117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 00:28:46.842 [2024-05-15 16:06:45.266587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.266908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.266919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 
00:28:46.842 [2024-05-15 16:06:45.267294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.267667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.267679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 00:28:46.842 [2024-05-15 16:06:45.268123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.268542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.268554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 00:28:46.842 [2024-05-15 16:06:45.268975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.269403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.269416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 00:28:46.842 [2024-05-15 16:06:45.269798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.270254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.270266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 00:28:46.842 [2024-05-15 16:06:45.270715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.271107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.271119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 00:28:46.842 [2024-05-15 16:06:45.271446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.271894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.271906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 00:28:46.842 [2024-05-15 16:06:45.272290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.272668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-05-15 16:06:45.272680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:46.842 qpair failed and we were unable to recover it. 
00:28:46.842 - 00:28:46.847 [2024-05-15 16:06:45.273007 through 16:06:45.389947] the same failure pattern repeats for every remaining connection attempt in this interval: repeated posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 entries, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420, then qpair failed and we were unable to recover it.
00:28:46.847 [2024-05-15 16:06:45.390364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.847 [2024-05-15 16:06:45.390792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.847 [2024-05-15 16:06:45.390804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:46.847 qpair failed and we were unable to recover it.
00:28:47.113 [2024-05-15 16:06:45.391236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.391554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.391566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.391967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.392405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.392418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.392797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.393250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.393263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.393699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.394151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.394163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.394536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.394918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.394929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.395378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.395753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.395765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.396204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.396655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.396667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 
00:28:47.113 [2024-05-15 16:06:45.397162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.397543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.397555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.397944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.398262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.398274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.398611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.399050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.399062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.399437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.399742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.399753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.400073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.400508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.400521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.400857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.401322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.401334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.401746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.402224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.402236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 
00:28:47.113 [2024-05-15 16:06:45.402627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.403028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.403041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.403440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.403867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.403879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.404336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.404783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.404796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.405173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.405712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.405725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.406106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.406581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.406593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.407079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.407464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.407477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.407849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.408297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.408309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 
00:28:47.113 [2024-05-15 16:06:45.408764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.409240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.113 [2024-05-15 16:06:45.409252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.113 qpair failed and we were unable to recover it. 00:28:47.113 [2024-05-15 16:06:45.409729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.410177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.410189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.410599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.411027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.411038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.411436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.411840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.411852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.412318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.412616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.412629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.412983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.413410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.413423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.413805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.414261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.414274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 
00:28:47.114 [2024-05-15 16:06:45.414636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.414989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.415001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.415460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.415905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.415917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.416366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.416744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.416756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.417221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.417716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.417728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.418207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.418635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.418647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.419043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.419428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.419440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.419824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.420226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.420238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 
00:28:47.114 [2024-05-15 16:06:45.420681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.421082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.421093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.421422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.421868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.421880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.422204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.422562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.422574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.422984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.423373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.423385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.423766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.424230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.424242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.424570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.424975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.424987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.425439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.425818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.425830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 
00:28:47.114 [2024-05-15 16:06:45.426205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.426606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.426618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.426992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.427369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.427382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.427767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.428073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.428085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.428535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.428864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.428877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.429325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.429795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.429807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.114 [2024-05-15 16:06:45.430309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.430630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.114 [2024-05-15 16:06:45.430642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.114 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.431036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.431485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.431498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 
00:28:47.115 [2024-05-15 16:06:45.431895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.432341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.432354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.432751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.433235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.433248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.433644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.434028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.434040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.434498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.434897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.434909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.435307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.435735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.435747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.436213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.436649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.436661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.437041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.437495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.437508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 
00:28:47.115 [2024-05-15 16:06:45.437842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.438211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.438224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.438545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.438997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.439010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.439415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.439798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.439811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.440244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.440645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.440658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.441057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.441503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.441516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.441888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.442361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.442374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.442691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.443020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.443033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 
00:28:47.115 [2024-05-15 16:06:45.443419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.443797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.443809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.444264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.444572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.444584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.444981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.445342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.445355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.445738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.446116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.446129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.446527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.446907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.446919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.447295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.447679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.447692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.448081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.448457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.448470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 
00:28:47.115 [2024-05-15 16:06:45.448856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.449306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.449319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.449713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.450078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.450091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.115 [2024-05-15 16:06:45.450455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.450788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.115 [2024-05-15 16:06:45.450800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.115 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.451250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.451570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.451582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.451919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.452376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.452389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.452747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.453238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.453250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.453677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.454061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.454073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 
00:28:47.116 [2024-05-15 16:06:45.454540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.454939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.454952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.455328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.455692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.455704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.456169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.456574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.456587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.456968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.457419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.457432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.457806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.458083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.458096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.458510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.458907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.458920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.459373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.459787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.459800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 
00:28:47.116 [2024-05-15 16:06:45.460181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.460551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.460563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.460944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.461306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.461320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.461700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.462071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.462083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.462454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.462853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.462865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.463311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.463697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.463710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.464188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.464683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.464696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.465030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.465456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.465468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 
00:28:47.116 [2024-05-15 16:06:45.465897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.466261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.466274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.466475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.466799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.466811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.467135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.467492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.467504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.467667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.468043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.468055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.468440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.468823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.468835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.469176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.469309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.469321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 00:28:47.116 [2024-05-15 16:06:45.469643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.470011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.116 [2024-05-15 16:06:45.470023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.116 qpair failed and we were unable to recover it. 
00:28:47.116 [2024-05-15 16:06:45.470408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.470724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.470737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.471125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.471451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.471463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.471822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.472122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.472134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.472432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.472867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.472879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.473337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.473666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.473678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.474001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.474369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.474381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.474590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.474780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.474795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.475119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.475502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.475515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.475892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.476315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.476327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.476759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.476965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.476977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.477353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.477719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.477731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.478083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.478409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.478421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.478734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.479040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.479053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.479450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.479756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.479768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.480076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.480382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.480394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.480767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.481210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.481222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.481652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.481973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.481987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.482278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.482492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.482503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.482930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.483313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.483325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.483759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.484203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.484216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.484517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.484947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.484960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.485270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.485646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.485658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.486026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.486382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.486395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.486701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.487078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.487091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.487465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.487831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.487842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.117 qpair failed and we were unable to recover it.
00:28:47.117 [2024-05-15 16:06:45.488166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.117 [2024-05-15 16:06:45.488618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.488631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.489011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.489444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.489457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.489861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.490243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.490256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.490651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.491019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.491031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.491483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.491649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.491661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.492046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.492415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.492428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.492827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.493183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.493200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.493576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.493875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.493887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.494330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.494641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.494653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.495024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.495160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.495173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.495566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.495939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.495951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.496345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.496664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.496678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.497126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.497557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.497569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.497950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.498333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.498345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.498670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.498830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.498842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.499214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.499588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.499600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.499953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.500379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.500391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.500780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.501207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.501220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.118 [2024-05-15 16:06:45.501646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.501950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.118 [2024-05-15 16:06:45.501963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.118 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.502324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.502703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.502715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.503024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.503478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.503491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.503706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.504018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.504030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.504528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.504967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.504979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.505290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.505721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.505733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.506123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.506522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.506535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.506850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.507219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.507232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.507609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.507921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.507933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.508311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.508687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.508700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.509133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.509505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.509518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.509885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.510209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.510222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.510607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.510971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.510983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.511392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.511756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.511768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.512173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.512494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.512506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.512937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.513309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.513321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.513701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.514084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.514096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.514383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.514693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.514705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.515071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.515405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.515417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.515802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.516179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.516197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.516588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.516954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.516967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.517348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.517676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.517688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.518063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.518376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.518389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.518701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.519091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.519103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.119 qpair failed and we were unable to recover it.
00:28:47.119 [2024-05-15 16:06:45.519468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.119 [2024-05-15 16:06:45.519854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.519866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.520160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.520485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.520498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.520795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.521166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.521180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.521512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.521955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.521967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.522282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.522648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.522662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.522957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.523292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.523304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.523749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.523911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.523923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.524145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.524493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.524506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.524896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.525325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.525338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.525652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.525960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.525972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.526131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.526501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.526514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.526902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.527279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.527291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.527665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.527983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.527995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.528381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.528802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.528814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.529046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.529362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.529374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.529727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.530043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.530056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.530418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.530778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.530790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.531142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.531525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.531537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.531969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.532299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.532312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.532619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.532998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.533010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.533446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.533895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.533908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.534204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.534596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.534608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.534907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.535374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.535386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.535757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.536075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.536087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.536537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.536928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.120 [2024-05-15 16:06:45.536940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.120 qpair failed and we were unable to recover it.
00:28:47.120 [2024-05-15 16:06:45.537300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.537680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.537692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.538022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.538421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.538434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.538817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.539242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.539256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.539559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.539944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.539956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.540320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.540638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.540650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.541105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.541416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.541429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.541740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.542065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.542077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.542458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.542829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.542841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.543214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.543673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.543686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.544089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.544401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.544413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.544782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.545087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.545100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.545464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.545833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.545846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.546299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.546543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.546556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.546870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.547237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.547249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.547611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.547982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.547995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.548448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.548828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.548840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.549245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.549549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.549561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.549895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.550224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.550237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.550619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.550985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.550998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.551382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.551738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.551751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.552075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.552462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.552476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.552849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.553220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.553233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.553531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.553957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.553970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.554349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.554679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.554692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.555079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.555460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.555472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.121 qpair failed and we were unable to recover it.
00:28:47.121 [2024-05-15 16:06:45.555870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.121 [2024-05-15 16:06:45.556188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.556204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.556506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.556877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.556889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.557348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.557652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.557664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.558042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.558180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.558196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.558651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.558792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.558803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.559179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.559566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.559578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.559772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.560162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.560174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.560489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.560922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.560934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.561370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.561730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.561743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.562069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.562445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.562457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.562762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.563212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.563224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.563595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.563880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.563893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.564221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.564585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.564597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.564896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.565216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.565228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.565594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.566037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.566049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.566364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.566652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.566665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.566977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.567195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.567208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.567661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.568039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.568051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.568364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.568505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.568518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.568898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.569203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.569220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.569531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.569831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.569843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.570146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.570578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.570590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.122 qpair failed and we were unable to recover it.
00:28:47.122 [2024-05-15 16:06:45.570873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.122 [2024-05-15 16:06:45.571243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.571256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.571630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.571950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.571962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.572276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.572652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.572664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.573018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.573382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.573395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.573791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.574159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.574172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.574559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.574843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.574855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.575181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.575501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.575514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.575914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.576215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.576228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.576605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.576966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.576977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.577352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.577660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.577672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.578040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.578404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.578417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.578820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.579205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.579218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.579531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.579908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.579921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.580241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.580553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.580565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.580931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.581237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.123 [2024-05-15 16:06:45.581249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.123 qpair failed and we were unable to recover it.
00:28:47.123 [2024-05-15 16:06:45.581557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.581888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.581900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.123 qpair failed and we were unable to recover it. 00:28:47.123 [2024-05-15 16:06:45.582288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.582659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.582671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.123 qpair failed and we were unable to recover it. 00:28:47.123 [2024-05-15 16:06:45.582986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.583369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.583381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.123 qpair failed and we were unable to recover it. 00:28:47.123 [2024-05-15 16:06:45.583547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.583939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.583951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.123 qpair failed and we were unable to recover it. 00:28:47.123 [2024-05-15 16:06:45.584336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.584701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.123 [2024-05-15 16:06:45.584713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.123 qpair failed and we were unable to recover it. 00:28:47.123 [2024-05-15 16:06:45.585043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.585421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.585435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.585772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.586092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.586104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 
00:28:47.124 [2024-05-15 16:06:45.586422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.586676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.586688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.587131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.587454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.587467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.587761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.588046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.588058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.588359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.588722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.588735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.589106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.589463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.589476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.589837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.590145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.590157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.590528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.590901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.590915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 
00:28:47.124 [2024-05-15 16:06:45.591234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.591706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.591719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.592156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.592606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.592619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.592999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.593300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.593312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.593742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.594117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.594130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.594278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.594646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.594658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.594971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.595334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.595346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.595664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.596118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.596131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 
00:28:47.124 [2024-05-15 16:06:45.596449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.596804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.596816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.597227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.597587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.597599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.598028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.598480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.598495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.598805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.599109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.599121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.599511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.599886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.599898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.600209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.600416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.600429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.600575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.600913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.600925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 
00:28:47.124 [2024-05-15 16:06:45.601301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.601602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.601614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.124 [2024-05-15 16:06:45.601937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.602240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.124 [2024-05-15 16:06:45.602252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.124 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.602631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.602918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.602931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.603299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.603668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.603680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.604080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.604408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.604420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.604741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.605105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.605120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.605434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.605809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.605821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 
00:28:47.125 [2024-05-15 16:06:45.606254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.606576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.606588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.607015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.607341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.607353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.607727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.608036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.608050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.608424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.608839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.608851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.609280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.609579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.609592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.609978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.610430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.610442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.610766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.611201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.611213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 
00:28:47.125 [2024-05-15 16:06:45.611669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.611981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.611993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.612359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.612683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.612697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.613110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.613469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.613481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.613911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.614277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.614289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.614647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.614998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.615010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.615307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.615730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.615741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.616069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.616453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.616465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 
00:28:47.125 [2024-05-15 16:06:45.616881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.617236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.617248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.617630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.618000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.618011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.618463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.618915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.618927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.619383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.619886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.619897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.620380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.620690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.620702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.125 qpair failed and we were unable to recover it. 00:28:47.125 [2024-05-15 16:06:45.621039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.621413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.125 [2024-05-15 16:06:45.621425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.621859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.622302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.622314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 
00:28:47.126 [2024-05-15 16:06:45.622489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.622820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.622832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.623234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.623558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.623569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.623999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.624385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.624397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.624762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.625060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.625071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.625424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.625794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.625805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.626178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.626559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.626571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.626830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.627288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.627300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 
00:28:47.126 [2024-05-15 16:06:45.627458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.627833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.627844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.628231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.628658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.628671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.628997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.629444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.629457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.629836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.630282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.630294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.630675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.631036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.631047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.631381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.631762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.631774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.632234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.632608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.632620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 
00:28:47.126 [2024-05-15 16:06:45.633096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.633536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.633548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.633947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.634324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.634336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.634710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:47.126 [2024-05-15 16:06:45.635164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.635178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:47.126 [2024-05-15 16:06:45.635413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:47.126 [2024-05-15 16:06:45.635792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.635806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.126 [2024-05-15 16:06:45.636126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.126 [2024-05-15 16:06:45.636581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.636594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.637064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.637511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.637525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 
00:28:47.126 [2024-05-15 16:06:45.637910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.638366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.638378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.638785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.639244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.126 [2024-05-15 16:06:45.639256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.126 qpair failed and we were unable to recover it. 00:28:47.126 [2024-05-15 16:06:45.639592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.639904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.639915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.640234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.640655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.640667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.641046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.641501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.641515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.641902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.642325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.642338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.642745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.643219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.643234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 
00:28:47.127 [2024-05-15 16:06:45.643618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.643947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.643959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.644388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.644703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.644715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.645164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.645547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.645560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.645950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.646406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.646418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.646805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.647185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.647200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.647578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.647929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.647942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.648398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.648779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.648791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 
00:28:47.127 [2024-05-15 16:06:45.649172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.649580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.649592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.649971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.650425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.650438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.650779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.651225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.651240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.651622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.651943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.651954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.652384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.652763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.652774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.653246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.653580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.653593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 00:28:47.127 [2024-05-15 16:06:45.653976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.654267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.127 [2024-05-15 16:06:45.654281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420 00:28:47.127 qpair failed and we were unable to recover it. 
00:28:47.127 [2024-05-15 16:06:45.654610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.127 [2024-05-15 16:06:45.654976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.127 [2024-05-15 16:06:45.654988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.127 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats with advancing timestamps through 16:06:45.681 ...]
00:28:47.391 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:47.391 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:47.391 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:47.391 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures continue, interleaved with the script trace, through 16:06:45.698 ...]
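errno 111 is ECONNREFUSED: the initiator keeps dialing 10.0.0.2:4420 before the target's listener is up, so every connect() is refused and the qpair retry loop spins. A minimal way to watch for the listener from the shell (a sketch using bash's /dev/tcp, assuming the same address and port; not part of the test itself):

    # loop until something accepts on the target port; until then each
    # attempt fails with errno 111 (ECONNREFUSED), exactly as logged above
    until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        sleep 0.1
    done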
[... connect() retry failures continue through 16:06:45.703 ...]
00:28:47.392 Malloc0
00:28:47.392 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:47.392 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:47.392 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:47.392 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures continue through 16:06:45.706 ...]
00:28:47.392 [2024-05-15 16:06:45.706694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... connect() retry failures continue through 16:06:45.710 ...]
[... connect() retry failures continue through 16:06:45.714 ...]
00:28:47.393 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:47.393 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:47.393 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:47.393 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures continue through 16:06:45.723 ...]
00:28:47.393 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:47.393 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:47.393 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:47.393 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures continue through 16:06:45.731 ...]
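The xtrace lines above show the target being assembled over JSON-RPC while the initiator keeps retrying: a malloc bdev, the TCP transport, a subsystem, and a namespace. Outside the harness the same bring-up would look roughly like this (a sketch assuming the default RPC socket; rpc_cmd is the harness wrapper around scripts/rpc.py):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0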
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures continue through 16:06:45.734 ...]
00:28:47.394 [2024-05-15 16:06:45.734712] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:28:47.394 [2024-05-15 16:06:45.734743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.394 [2024-05-15 16:06:45.734754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f6c000b90 with addr=10.0.0.2, port=4420
00:28:47.394 qpair failed and we were unable to recover it.
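The WARNING above concerns the RPC payload, not the listener itself: the request carried the deprecated [listen_]address.transport key, which SPDK still maps to trtype until its removal in v24.09. A hedged sketch of the two payload shapes (field names taken from the warning; the exact request body is not printed in this log):

    # deprecated form (what the wrapper evidently sent):
    #   "listen_address": { "transport": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" }
    # preferred form:
    #   "listen_address": { "trtype":    "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" }
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420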
00:28:47.394 [2024-05-15 16:06:45.734950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:47.394 [2024-05-15 16:06:45.737335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:47.394 [2024-05-15 16:06:45.737511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:47.394 [2024-05-15 16:06:45.737534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:47.394 [2024-05-15 16:06:45.737545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:47.394 [2024-05-15 16:06:45.737557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90
00:28:47.394 [2024-05-15 16:06:45.737580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:47.394 qpair failed and we were unable to recover it.
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.394 [2024-05-15 16:06:45.747291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:47.394 [2024-05-15 16:06:45.747417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:47.394 [2024-05-15 16:06:45.747437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:47.394 [2024-05-15 16:06:45.747448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:47.394 [2024-05-15 16:06:45.747457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90
00:28:47.394 [2024-05-15 16:06:45.747478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:47.394 qpair failed and we were unable to recover it.
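From here the failure mode changes: the target is now listening (NOTICE above), but each I/O-queue CONNECT is rejected. On the target side ctrlr.c reports "Unknown controller ID 0x1"; on the host side the Fabrics CONNECT completes with sct 1, sc 130. A hedged decode against the NVMe-oF spec: SCT 1 is the "command specific" status type, and for the CONNECT command SC 0x82 is "Connect Invalid Parameters" — consistent with the host naming controller ID 1 that the target no longer tracks after the forced disconnect. The sc value is printed in decimal:

    # convert the logged decimal status code to the status byte
    printf '0x%02x\n' 130    # -> 0x82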
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:47.394 16:06:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3923521
[... the same Unknown controller ID / failed CONNECT sequence repeats at 16:06:45.757, 16:06:45.767 and 16:06:45.777, each ending "qpair failed and we were unable to recover it." ...]
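The wait at host/target_disconnect.sh@50 blocks on a previously backgrounded process (PID 3923521, presumably the initiator workload launched earlier in the test) while the reconnect attempts above keep failing — which is the point of a target-disconnect test case. The generic shape of what the harness is doing here (names hypothetical):

    # start the initiator workload in the background, then yank the target
    run_initiator_workload &
    workload_pid=$!
    # ... disconnect/reconfigure the target underneath it ...
    wait "$workload_pid"    # line @50 in host/target_disconnect.sh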
[... the same sequence — ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1; nvme_fabric.c Connect command failed, rc -5; Connect command completed with error: sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7f3f6c000b90; CQ transport error -6 (No such device or address) on qpair id 2; "qpair failed and we were unable to recover it." — repeats roughly every 10 ms from 16:06:45.787 through 16:06:45.958 ...]
00:28:47.654 [2024-05-15 16:06:45.967773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.654 [2024-05-15 16:06:45.967892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.654 [2024-05-15 16:06:45.967911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.654 [2024-05-15 16:06:45.967921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.654 [2024-05-15 16:06:45.967930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.654 [2024-05-15 16:06:45.967949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.654 qpair failed and we were unable to recover it. 00:28:47.654 [2024-05-15 16:06:45.977859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.654 [2024-05-15 16:06:45.978205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.654 [2024-05-15 16:06:45.978224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.654 [2024-05-15 16:06:45.978234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.654 [2024-05-15 16:06:45.978243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.654 [2024-05-15 16:06:45.978262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.654 qpair failed and we were unable to recover it. 00:28:47.654 [2024-05-15 16:06:45.988031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.654 [2024-05-15 16:06:45.988208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.654 [2024-05-15 16:06:45.988226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.654 [2024-05-15 16:06:45.988236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.654 [2024-05-15 16:06:45.988245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.654 [2024-05-15 16:06:45.988265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.654 qpair failed and we were unable to recover it. 
00:28:47.654 [2024-05-15 16:06:45.997944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.654 [2024-05-15 16:06:45.998065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.654 [2024-05-15 16:06:45.998083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.654 [2024-05-15 16:06:45.998097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.654 [2024-05-15 16:06:45.998105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:45.998125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.007943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.008064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.008083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.008093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.008102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.008121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.017997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.018118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.018136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.018146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.018154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.018174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 
00:28:47.655 [2024-05-15 16:06:46.027989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.028108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.028127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.028138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.028146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.028165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.038023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.038141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.038159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.038169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.038178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.038202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.048000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.048119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.048138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.048148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.048156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.048175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 
00:28:47.655 [2024-05-15 16:06:46.058071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.058210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.058229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.058240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.058249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.058268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.068101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.068227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.068247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.068257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.068265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.068285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.078137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.078262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.078281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.078291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.078299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.078318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 
00:28:47.655 [2024-05-15 16:06:46.088114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.088244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.088263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.088276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.088284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.088303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.098164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.098288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.098307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.098317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.098325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.098345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.108187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.108311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.108330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.108339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.108348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.108366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 
00:28:47.655 [2024-05-15 16:06:46.118229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.118450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.118470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.118481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.118490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.118510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.128247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.128363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.128382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.128392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.128401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.128420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.138203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.138324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.138343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.138353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.138361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.138380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 
00:28:47.655 [2024-05-15 16:06:46.148269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.148426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.148445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.148455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.148463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.148483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.158339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.158473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.158492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.158502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.158511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.158530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.168281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.168396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.168414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.168424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.168432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.168451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 
00:28:47.655 [2024-05-15 16:06:46.178390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.178526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.178547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.178557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.178566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.178584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.188413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.188532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.188550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.188560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.188569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.188587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.655 [2024-05-15 16:06:46.198454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.198574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.198593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.198603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.198611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.198631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 
00:28:47.655 [2024-05-15 16:06:46.208467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.655 [2024-05-15 16:06:46.208591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.655 [2024-05-15 16:06:46.208610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.655 [2024-05-15 16:06:46.208620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.655 [2024-05-15 16:06:46.208628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.655 [2024-05-15 16:06:46.208647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.655 qpair failed and we were unable to recover it. 00:28:47.914 [2024-05-15 16:06:46.218507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.914 [2024-05-15 16:06:46.218625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.914 [2024-05-15 16:06:46.218643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.218653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.218662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.218683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.228469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.228585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.228604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.228614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.228622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.228642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 
00:28:47.915 [2024-05-15 16:06:46.238551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.238671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.238690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.238699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.238708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.238727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.248586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.248700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.248718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.248728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.248737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.248756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.258621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.258742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.258760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.258770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.258779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.258799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 
00:28:47.915 [2024-05-15 16:06:46.268649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.268768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.268792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.268803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.268812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.268831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.278672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.278790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.278809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.278819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.278827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.278846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.288680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.288800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.288818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.288828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.288837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.288856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 
00:28:47.915 [2024-05-15 16:06:46.298705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.298825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.298844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.298854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.298863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.298882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.308739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.308856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.308875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.308885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.308896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.308915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.318771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.318892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.318911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.318921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.318929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.318948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 
00:28:47.915 [2024-05-15 16:06:46.328789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.328905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.328924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.328934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.328942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.328961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.338833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.339124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.339143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.339152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.339161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.339179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.915 qpair failed and we were unable to recover it. 00:28:47.915 [2024-05-15 16:06:46.348784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.915 [2024-05-15 16:06:46.348900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.915 [2024-05-15 16:06:46.348918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.915 [2024-05-15 16:06:46.348928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.915 [2024-05-15 16:06:46.348937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.915 [2024-05-15 16:06:46.348955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 
00:28:47.916 [2024-05-15 16:06:46.358896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.359017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.359036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.359046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.359054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.359074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 00:28:47.916 [2024-05-15 16:06:46.368956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.369085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.369104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.369114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.369122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.369141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 00:28:47.916 [2024-05-15 16:06:46.378871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.378993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.379011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.379021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.379030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.379049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 
00:28:47.916 [2024-05-15 16:06:46.388985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.389103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.389122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.389132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.389141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.389160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 00:28:47.916 [2024-05-15 16:06:46.399002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.399121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.399139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.399152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.399161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.399180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 00:28:47.916 [2024-05-15 16:06:46.409056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.409173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.409197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.409207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.409215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.409235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 
00:28:47.916 [2024-05-15 16:06:46.419070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.419194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.419213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.419223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.419231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.419250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 00:28:47.916 [2024-05-15 16:06:46.429141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.429272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.429290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.429300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.429309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.429328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 00:28:47.916 [2024-05-15 16:06:46.439125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.916 [2024-05-15 16:06:46.439251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.916 [2024-05-15 16:06:46.439270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.916 [2024-05-15 16:06:46.439280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.916 [2024-05-15 16:06:46.439288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:47.916 [2024-05-15 16:06:46.439308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.916 qpair failed and we were unable to recover it. 
[log condensed: the identical seven-line CONNECT failure sequence above repeats at roughly 10 ms intervals from 16:06:46.449123 through 16:06:47.101171 (elapsed 00:28:47.916 to 00:28:48.699), each attempt failing with sct 1, sc 130 against tqpair=0x7f3f6c000b90 on qpair id 2 and ending with "qpair failed and we were unable to recover it."]
00:28:48.699 [2024-05-15 16:06:47.110986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.111102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.111120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.111130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.111141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.111160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 00:28:48.699 [2024-05-15 16:06:47.120937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.121067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.121086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.121096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.121104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.121124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 00:28:48.699 [2024-05-15 16:06:47.131033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.131154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.131175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.131185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.131198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.131218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 
00:28:48.699 [2024-05-15 16:06:47.141079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.141204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.141223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.141233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.141242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.141261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 00:28:48.699 [2024-05-15 16:06:47.151103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.151226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.151245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.151255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.151263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.151282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 00:28:48.699 [2024-05-15 16:06:47.161060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.161182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.161206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.161216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.161224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.161243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 
00:28:48.699 [2024-05-15 16:06:47.171144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.171266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.171285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.171295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.171304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.171322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 00:28:48.699 [2024-05-15 16:06:47.181157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.181278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.181297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.181307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.181315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.181334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 00:28:48.699 [2024-05-15 16:06:47.191213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.191335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.191354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.699 [2024-05-15 16:06:47.191364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.699 [2024-05-15 16:06:47.191373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.699 [2024-05-15 16:06:47.191392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.699 qpair failed and we were unable to recover it. 
00:28:48.699 [2024-05-15 16:06:47.201212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.699 [2024-05-15 16:06:47.201329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.699 [2024-05-15 16:06:47.201348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.700 [2024-05-15 16:06:47.201358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.700 [2024-05-15 16:06:47.201369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.700 [2024-05-15 16:06:47.201389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.700 qpair failed and we were unable to recover it. 00:28:48.700 [2024-05-15 16:06:47.211248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.700 [2024-05-15 16:06:47.211368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.700 [2024-05-15 16:06:47.211387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.700 [2024-05-15 16:06:47.211397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.700 [2024-05-15 16:06:47.211406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.700 [2024-05-15 16:06:47.211424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.700 qpair failed and we were unable to recover it. 00:28:48.700 [2024-05-15 16:06:47.221254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.700 [2024-05-15 16:06:47.221408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.700 [2024-05-15 16:06:47.221426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.700 [2024-05-15 16:06:47.221436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.700 [2024-05-15 16:06:47.221444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.700 [2024-05-15 16:06:47.221464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.700 qpair failed and we were unable to recover it. 
00:28:48.700 [2024-05-15 16:06:47.231302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.700 [2024-05-15 16:06:47.231422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.700 [2024-05-15 16:06:47.231440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.700 [2024-05-15 16:06:47.231450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.700 [2024-05-15 16:06:47.231459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.700 [2024-05-15 16:06:47.231479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.700 qpair failed and we were unable to recover it. 00:28:48.700 [2024-05-15 16:06:47.241347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.700 [2024-05-15 16:06:47.241467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.700 [2024-05-15 16:06:47.241486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.700 [2024-05-15 16:06:47.241495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.700 [2024-05-15 16:06:47.241504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.700 [2024-05-15 16:06:47.241523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.700 qpair failed and we were unable to recover it. 00:28:48.700 [2024-05-15 16:06:47.251309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.700 [2024-05-15 16:06:47.251428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.700 [2024-05-15 16:06:47.251447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.700 [2024-05-15 16:06:47.251457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.700 [2024-05-15 16:06:47.251466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.700 [2024-05-15 16:06:47.251484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.700 qpair failed and we were unable to recover it. 
00:28:48.989 [2024-05-15 16:06:47.261477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.261602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.261620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.261629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.261638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.261657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 00:28:48.989 [2024-05-15 16:06:47.271437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.271555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.271573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.271584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.271592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.271611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 00:28:48.989 [2024-05-15 16:06:47.281435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.281552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.281571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.281580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.281589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.281608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 
00:28:48.989 [2024-05-15 16:06:47.291495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.291637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.291655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.291669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.291677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.291695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 00:28:48.989 [2024-05-15 16:06:47.301470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.301595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.301613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.301623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.301631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.301650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 00:28:48.989 [2024-05-15 16:06:47.311464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.311583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.311602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.311612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.311620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.311640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 
00:28:48.989 [2024-05-15 16:06:47.321575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.321694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.321712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.321722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.321731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.321750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 00:28:48.989 [2024-05-15 16:06:47.331618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.331741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.331759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.331769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.331778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.331797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 00:28:48.989 [2024-05-15 16:06:47.341800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.341926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.341944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.341954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.341962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.341982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 
00:28:48.989 [2024-05-15 16:06:47.351640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.351756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.351775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.351784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.351793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.351813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 00:28:48.989 [2024-05-15 16:06:47.361665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.361783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.361802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.361811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.361820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.989 [2024-05-15 16:06:47.361839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.989 qpair failed and we were unable to recover it. 00:28:48.989 [2024-05-15 16:06:47.371696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.989 [2024-05-15 16:06:47.371860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.989 [2024-05-15 16:06:47.371878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.989 [2024-05-15 16:06:47.371888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.989 [2024-05-15 16:06:47.371897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.371916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 
00:28:48.990 [2024-05-15 16:06:47.381675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.381800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.381823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.381833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.381841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.381860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 00:28:48.990 [2024-05-15 16:06:47.391706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.391833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.391852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.391862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.391870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.391889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 00:28:48.990 [2024-05-15 16:06:47.401778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.401894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.401912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.401922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.401931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.401950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 
00:28:48.990 [2024-05-15 16:06:47.411765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.411885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.411903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.411913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.411921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.411940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 00:28:48.990 [2024-05-15 16:06:47.421843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.421964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.421982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.421992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.422001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.422023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 00:28:48.990 [2024-05-15 16:06:47.431943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.432054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.432073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.432083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.432091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.432111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 
00:28:48.990 [2024-05-15 16:06:47.441915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.442031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.442050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.442059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.442068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.442086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 00:28:48.990 [2024-05-15 16:06:47.451937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.452055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.452074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.452083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.452092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.452111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 00:28:48.990 [2024-05-15 16:06:47.461960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.462078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.462097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.462107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.462116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.462135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 
00:28:48.990 [2024-05-15 16:06:47.472012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.472129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.472151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.472161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.472169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.472188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 00:28:48.990 [2024-05-15 16:06:47.482038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.482153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.482172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.990 [2024-05-15 16:06:47.482181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.990 [2024-05-15 16:06:47.482194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.990 [2024-05-15 16:06:47.482214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.990 qpair failed and we were unable to recover it. 00:28:48.990 [2024-05-15 16:06:47.492052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.990 [2024-05-15 16:06:47.492170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.990 [2024-05-15 16:06:47.492188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.991 [2024-05-15 16:06:47.492204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.991 [2024-05-15 16:06:47.492213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.991 [2024-05-15 16:06:47.492231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.991 qpair failed and we were unable to recover it. 
00:28:48.991 [2024-05-15 16:06:47.502095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.991 [2024-05-15 16:06:47.502215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.991 [2024-05-15 16:06:47.502234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.991 [2024-05-15 16:06:47.502243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.991 [2024-05-15 16:06:47.502252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.991 [2024-05-15 16:06:47.502272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.991 qpair failed and we were unable to recover it. 00:28:48.991 [2024-05-15 16:06:47.512138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.991 [2024-05-15 16:06:47.512262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.991 [2024-05-15 16:06:47.512281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.991 [2024-05-15 16:06:47.512291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.991 [2024-05-15 16:06:47.512299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.991 [2024-05-15 16:06:47.512321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.991 qpair failed and we were unable to recover it. 00:28:48.991 [2024-05-15 16:06:47.522155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.991 [2024-05-15 16:06:47.522273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.991 [2024-05-15 16:06:47.522291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.991 [2024-05-15 16:06:47.522301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.991 [2024-05-15 16:06:47.522309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.991 [2024-05-15 16:06:47.522329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.991 qpair failed and we were unable to recover it. 
00:28:48.991 [2024-05-15 16:06:47.532177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.991 [2024-05-15 16:06:47.532300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.991 [2024-05-15 16:06:47.532319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.991 [2024-05-15 16:06:47.532329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.991 [2024-05-15 16:06:47.532337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.991 [2024-05-15 16:06:47.532356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.991 qpair failed and we were unable to recover it. 00:28:48.991 [2024-05-15 16:06:47.542213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.991 [2024-05-15 16:06:47.542336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.991 [2024-05-15 16:06:47.542355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.991 [2024-05-15 16:06:47.542364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.991 [2024-05-15 16:06:47.542373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:48.991 [2024-05-15 16:06:47.542392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.991 qpair failed and we were unable to recover it. 00:28:49.250 [2024-05-15 16:06:47.552294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.552411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.552431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.552443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.552452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.552472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 
00:28:49.250 [2024-05-15 16:06:47.562270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.562389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.562407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.562417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.562425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.562444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 00:28:49.250 [2024-05-15 16:06:47.572306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.572426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.572444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.572454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.572463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.572482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 00:28:49.250 [2024-05-15 16:06:47.582323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.582444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.582462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.582472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.582481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.582499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 
00:28:49.250 [2024-05-15 16:06:47.592367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.592486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.592504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.592514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.592522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.592541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 00:28:49.250 [2024-05-15 16:06:47.602384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.602500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.602518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.602528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.602539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.602558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 00:28:49.250 [2024-05-15 16:06:47.612402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.612520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.612538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.612548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.612557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.612576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 
00:28:49.250 [2024-05-15 16:06:47.622438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.622561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.622579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.622589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.622598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.622617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 00:28:49.250 [2024-05-15 16:06:47.632467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.632589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.632608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.632617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.632626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.632645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 00:28:49.250 [2024-05-15 16:06:47.642490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.642605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.642624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.642634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.250 [2024-05-15 16:06:47.642642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.250 [2024-05-15 16:06:47.642662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.250 qpair failed and we were unable to recover it. 
00:28:49.250 [2024-05-15 16:06:47.652514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.250 [2024-05-15 16:06:47.652639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.250 [2024-05-15 16:06:47.652657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.250 [2024-05-15 16:06:47.652667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.652676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.652695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.662548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.662666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.662685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.662695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.662703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.662722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.672579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.672696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.672714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.672723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.672732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.672750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 
00:28:49.251 [2024-05-15 16:06:47.682599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.682716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.682735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.682745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.682753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.682773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.692616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.692732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.692751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.692764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.692772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.692791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.702577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.702697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.702715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.702724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.702733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.702751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 
00:28:49.251 [2024-05-15 16:06:47.712718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.712878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.712896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.712906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.712914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.712933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.722716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.722836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.722854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.722864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.722873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.722892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.732727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.732883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.732902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.732911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.732920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.732939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 
00:28:49.251 [2024-05-15 16:06:47.742754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.742868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.742887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.742897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.742905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.742923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.752775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.752891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.752910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.752920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.752928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.752947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.762810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.762927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.762946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.762955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.762964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.762983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 
00:28:49.251 [2024-05-15 16:06:47.772832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.772951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.772970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.772981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.772989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.773008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.782886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.783005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.251 [2024-05-15 16:06:47.783023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.251 [2024-05-15 16:06:47.783037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.251 [2024-05-15 16:06:47.783045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.251 [2024-05-15 16:06:47.783064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.251 qpair failed and we were unable to recover it. 00:28:49.251 [2024-05-15 16:06:47.792911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.251 [2024-05-15 16:06:47.793028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.252 [2024-05-15 16:06:47.793047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.252 [2024-05-15 16:06:47.793057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.252 [2024-05-15 16:06:47.793065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.252 [2024-05-15 16:06:47.793084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.252 qpair failed and we were unable to recover it. 
00:28:49.252 [2024-05-15 16:06:47.802937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.252 [2024-05-15 16:06:47.803053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.252 [2024-05-15 16:06:47.803071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.252 [2024-05-15 16:06:47.803081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.252 [2024-05-15 16:06:47.803090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.252 [2024-05-15 16:06:47.803109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.252 qpair failed and we were unable to recover it. 00:28:49.511 [2024-05-15 16:06:47.812946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.511 [2024-05-15 16:06:47.813067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.511 [2024-05-15 16:06:47.813085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.511 [2024-05-15 16:06:47.813095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.511 [2024-05-15 16:06:47.813103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.511 [2024-05-15 16:06:47.813122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.511 qpair failed and we were unable to recover it. 00:28:49.511 [2024-05-15 16:06:47.823007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.511 [2024-05-15 16:06:47.823131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.511 [2024-05-15 16:06:47.823148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.511 [2024-05-15 16:06:47.823158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.511 [2024-05-15 16:06:47.823167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.511 [2024-05-15 16:06:47.823186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.511 qpair failed and we were unable to recover it. 
00:28:49.511 [2024-05-15 16:06:47.832964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.511 [2024-05-15 16:06:47.833085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.511 [2024-05-15 16:06:47.833104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.511 [2024-05-15 16:06:47.833114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.511 [2024-05-15 16:06:47.833123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.511 [2024-05-15 16:06:47.833142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.511 qpair failed and we were unable to recover it. 00:28:49.511 [2024-05-15 16:06:47.843058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.511 [2024-05-15 16:06:47.843172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.843195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.843206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.843215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.843234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.853075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.853200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.853219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.853229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.853238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.853258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 
00:28:49.512 [2024-05-15 16:06:47.863130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.863272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.863290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.863300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.863309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.863328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.873148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.873461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.873484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.873493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.873502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.873521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.883180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.883296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.883314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.883324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.883333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.883352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 
00:28:49.512 [2024-05-15 16:06:47.893188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.893312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.893330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.893340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.893349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.893368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.903224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.903337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.903355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.903365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.903374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.903393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.913244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.913380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.913399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.913409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.913417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.913439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 
00:28:49.512 [2024-05-15 16:06:47.923278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.923411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.923429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.923439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.923447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.923466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.933295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.933417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.933435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.933445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.933453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.933472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.943343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.943472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.943490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.943500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.943509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.943528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 
00:28:49.512 [2024-05-15 16:06:47.953410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.953543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.953562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.953572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.953580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.953601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.963411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.963529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.963551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.963562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.963570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.963589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 00:28:49.512 [2024-05-15 16:06:47.973363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.512 [2024-05-15 16:06:47.973481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.512 [2024-05-15 16:06:47.973500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.512 [2024-05-15 16:06:47.973510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.512 [2024-05-15 16:06:47.973518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.512 [2024-05-15 16:06:47.973537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.512 qpair failed and we were unable to recover it. 
00:28:49.512 [2024-05-15 16:06:47.983426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:47.983544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:47.983562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:47.983572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:47.983581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:47.983600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 00:28:49.513 [2024-05-15 16:06:47.993413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:47.993525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:47.993544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:47.993554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:47.993563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:47.993582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 00:28:49.513 [2024-05-15 16:06:48.003533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:48.003672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:48.003691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:48.003701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:48.003713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:48.003732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 
00:28:49.513 [2024-05-15 16:06:48.013485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:48.013636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:48.013655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:48.013665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:48.013673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:48.013692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 00:28:49.513 [2024-05-15 16:06:48.023607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:48.023735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:48.023753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:48.023763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:48.023772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:48.023791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 00:28:49.513 [2024-05-15 16:06:48.033525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:48.033643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:48.033662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:48.033672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:48.033680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:48.033701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 
00:28:49.513 [2024-05-15 16:06:48.043618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:48.043750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:48.043769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:48.043779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:48.043787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:48.043806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 00:28:49.513 [2024-05-15 16:06:48.053636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:48.053759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:48.053778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:48.053788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:48.053796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:48.053815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 00:28:49.513 [2024-05-15 16:06:48.063679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.513 [2024-05-15 16:06:48.063797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.513 [2024-05-15 16:06:48.063816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.513 [2024-05-15 16:06:48.063826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.513 [2024-05-15 16:06:48.063835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.513 [2024-05-15 16:06:48.063854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.513 qpair failed and we were unable to recover it. 
00:28:49.772 [2024-05-15 16:06:48.073620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.073748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.073767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.073777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.073785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.073804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 00:28:49.773 [2024-05-15 16:06:48.083745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.083861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.083880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.083890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.083898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.083917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 00:28:49.773 [2024-05-15 16:06:48.093748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.093865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.093883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.093896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.093904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.093923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 
00:28:49.773 [2024-05-15 16:06:48.103811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.103934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.103952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.103962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.103970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.103989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 00:28:49.773 [2024-05-15 16:06:48.113749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.113870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.113888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.113898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.113906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.113925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 00:28:49.773 [2024-05-15 16:06:48.123843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.123961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.123979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.123989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.123998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.124018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 
00:28:49.773 [2024-05-15 16:06:48.133862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.133979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.133998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.134008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.134016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.134035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 00:28:49.773 [2024-05-15 16:06:48.143904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.144020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.144039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.144049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.144057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.144077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 00:28:49.773 [2024-05-15 16:06:48.153957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.154122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.154141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.154151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.154159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.154178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 
00:28:49.773 [2024-05-15 16:06:48.163949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.164257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.164276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.164285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.164294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.164313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 00:28:49.773 [2024-05-15 16:06:48.173969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.174089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.174107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.174117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.174126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.174144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 00:28:49.773 [2024-05-15 16:06:48.183998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.184159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.184177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.773 [2024-05-15 16:06:48.184194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.773 [2024-05-15 16:06:48.184203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.773 [2024-05-15 16:06:48.184223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.773 qpair failed and we were unable to recover it. 
00:28:49.773 [2024-05-15 16:06:48.194063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.773 [2024-05-15 16:06:48.194184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.773 [2024-05-15 16:06:48.194210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.194219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.194228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.194247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.204060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.204175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.204200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.204211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.204219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.204238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.214087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.214207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.214225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.214235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.214244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.214263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 
00:28:49.774 [2024-05-15 16:06:48.224129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.224251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.224269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.224279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.224288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.224307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.234087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.234206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.234225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.234235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.234243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.234263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.244184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.244304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.244323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.244333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.244342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.244361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 
00:28:49.774 [2024-05-15 16:06:48.254212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.254336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.254355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.254365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.254373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.254393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.264254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.264367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.264385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.264395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.264403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.264423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.274269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.274381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.274402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.274413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.274422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.274440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 
00:28:49.774 [2024-05-15 16:06:48.284337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.284506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.284524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.284534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.284543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.284562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.294305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.294423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.294442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.294452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.294460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.294479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.304340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.304460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.304478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.304488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.304497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.304516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 
00:28:49.774 [2024-05-15 16:06:48.314384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.774 [2024-05-15 16:06:48.314504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.774 [2024-05-15 16:06:48.314523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.774 [2024-05-15 16:06:48.314533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.774 [2024-05-15 16:06:48.314542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.774 [2024-05-15 16:06:48.314563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.774 qpair failed and we were unable to recover it. 00:28:49.774 [2024-05-15 16:06:48.324427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.775 [2024-05-15 16:06:48.324545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.775 [2024-05-15 16:06:48.324563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.775 [2024-05-15 16:06:48.324573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.775 [2024-05-15 16:06:48.324581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:49.775 [2024-05-15 16:06:48.324600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.775 qpair failed and we were unable to recover it. 00:28:50.034 [2024-05-15 16:06:48.334471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.034 [2024-05-15 16:06:48.334589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.034 [2024-05-15 16:06:48.334607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.034 [2024-05-15 16:06:48.334617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.034 [2024-05-15 16:06:48.334626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.034 [2024-05-15 16:06:48.334644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.034 qpair failed and we were unable to recover it. 
00:28:50.034 [2024-05-15 16:06:48.344468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.034 [2024-05-15 16:06:48.344588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.034 [2024-05-15 16:06:48.344607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.034 [2024-05-15 16:06:48.344616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.034 [2024-05-15 16:06:48.344625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.034 [2024-05-15 16:06:48.344644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.034 qpair failed and we were unable to recover it. 00:28:50.034 [2024-05-15 16:06:48.354429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.034 [2024-05-15 16:06:48.354590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.034 [2024-05-15 16:06:48.354609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.034 [2024-05-15 16:06:48.354618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.034 [2024-05-15 16:06:48.354627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.034 [2024-05-15 16:06:48.354646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.034 qpair failed and we were unable to recover it. 00:28:50.034 [2024-05-15 16:06:48.364508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.034 [2024-05-15 16:06:48.364624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.034 [2024-05-15 16:06:48.364646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.034 [2024-05-15 16:06:48.364656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.034 [2024-05-15 16:06:48.364664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.034 [2024-05-15 16:06:48.364683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.034 qpair failed and we were unable to recover it. 
00:28:50.034 [2024-05-15 16:06:48.374541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.034 [2024-05-15 16:06:48.374662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.034 [2024-05-15 16:06:48.374680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.034 [2024-05-15 16:06:48.374690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.034 [2024-05-15 16:06:48.374699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.034 [2024-05-15 16:06:48.374718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.034 qpair failed and we were unable to recover it. 00:28:50.034 [2024-05-15 16:06:48.384588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.034 [2024-05-15 16:06:48.384706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.384725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.384734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.384743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.384762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.394616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.394735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.394753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.394763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.394771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.394790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 
00:28:50.035 [2024-05-15 16:06:48.404574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.404689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.404707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.404717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.404728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.404747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.414662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.414778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.414796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.414807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.414815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.414834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.424696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.424832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.424850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.424860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.424870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.424890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 
00:28:50.035 [2024-05-15 16:06:48.434664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.434780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.434799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.434809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.434818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.434837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.444763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.444892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.444910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.444920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.444928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.444948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.454720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.454845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.454864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.454876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.454885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.454903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 
00:28:50.035 [2024-05-15 16:06:48.464800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.464923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.464943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.464952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.464961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.464980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.474848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.474961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.474980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.474990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.474999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.475017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.484896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.485019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.485038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.485048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.485057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.485075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 
00:28:50.035 [2024-05-15 16:06:48.494833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.494957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.494975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.494985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.494999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.495017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.504929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.505046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.505065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.505074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.505083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.505102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 00:28:50.035 [2024-05-15 16:06:48.514963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.515083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.035 [2024-05-15 16:06:48.515101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.035 [2024-05-15 16:06:48.515111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.035 [2024-05-15 16:06:48.515119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.035 [2024-05-15 16:06:48.515138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.035 qpair failed and we were unable to recover it. 
00:28:50.035 [2024-05-15 16:06:48.524897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.035 [2024-05-15 16:06:48.525196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.036 [2024-05-15 16:06:48.525215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.036 [2024-05-15 16:06:48.525224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.036 [2024-05-15 16:06:48.525233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.036 [2024-05-15 16:06:48.525252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.036 qpair failed and we were unable to recover it. 00:28:50.036 [2024-05-15 16:06:48.535014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.036 [2024-05-15 16:06:48.535159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.036 [2024-05-15 16:06:48.535178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.036 [2024-05-15 16:06:48.535188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.036 [2024-05-15 16:06:48.535202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.036 [2024-05-15 16:06:48.535221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.036 qpair failed and we were unable to recover it. 00:28:50.036 [2024-05-15 16:06:48.545051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.036 [2024-05-15 16:06:48.545177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.036 [2024-05-15 16:06:48.545199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.036 [2024-05-15 16:06:48.545209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.036 [2024-05-15 16:06:48.545218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.036 [2024-05-15 16:06:48.545237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.036 qpair failed and we were unable to recover it. 
00:28:50.036 [2024-05-15 16:06:48.555078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.036 [2024-05-15 16:06:48.555239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.036 [2024-05-15 16:06:48.555258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.036 [2024-05-15 16:06:48.555268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.036 [2024-05-15 16:06:48.555276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.036 [2024-05-15 16:06:48.555295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.036 qpair failed and we were unable to recover it. 00:28:50.036 [2024-05-15 16:06:48.565092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.036 [2024-05-15 16:06:48.565211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.036 [2024-05-15 16:06:48.565230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.036 [2024-05-15 16:06:48.565240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.036 [2024-05-15 16:06:48.565249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.036 [2024-05-15 16:06:48.565268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.036 qpair failed and we were unable to recover it. 00:28:50.036 [2024-05-15 16:06:48.575220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.036 [2024-05-15 16:06:48.575385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.036 [2024-05-15 16:06:48.575405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.036 [2024-05-15 16:06:48.575415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.036 [2024-05-15 16:06:48.575423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.036 [2024-05-15 16:06:48.575442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.036 qpair failed and we were unable to recover it. 
00:28:50.036 [2024-05-15 16:06:48.585081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.036 [2024-05-15 16:06:48.585207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.036 [2024-05-15 16:06:48.585226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.036 [2024-05-15 16:06:48.585240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.036 [2024-05-15 16:06:48.585248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.036 [2024-05-15 16:06:48.585267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.036 qpair failed and we were unable to recover it. 00:28:50.036 [2024-05-15 16:06:48.595188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.036 [2024-05-15 16:06:48.595311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.036 [2024-05-15 16:06:48.595330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.036 [2024-05-15 16:06:48.595339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.036 [2024-05-15 16:06:48.595348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.036 [2024-05-15 16:06:48.595368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.036 qpair failed and we were unable to recover it. 00:28:50.295 [2024-05-15 16:06:48.605226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.295 [2024-05-15 16:06:48.605346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.295 [2024-05-15 16:06:48.605365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.295 [2024-05-15 16:06:48.605374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.295 [2024-05-15 16:06:48.605383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.295 [2024-05-15 16:06:48.605402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.295 qpair failed and we were unable to recover it. 
00:28:50.295 [2024-05-15 16:06:48.615234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.295 [2024-05-15 16:06:48.615353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.295 [2024-05-15 16:06:48.615371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.295 [2024-05-15 16:06:48.615381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.295 [2024-05-15 16:06:48.615389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.295 [2024-05-15 16:06:48.615408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.295 qpair failed and we were unable to recover it. 00:28:50.295 [2024-05-15 16:06:48.625310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.295 [2024-05-15 16:06:48.625429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.295 [2024-05-15 16:06:48.625447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.295 [2024-05-15 16:06:48.625457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.295 [2024-05-15 16:06:48.625465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.295 [2024-05-15 16:06:48.625485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.635229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.635349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.635368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.635378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.635387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.635406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 
00:28:50.296 [2024-05-15 16:06:48.645316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.645430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.645448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.645458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.645467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.645486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.655325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.655446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.655465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.655475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.655484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.655503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.665419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.665549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.665567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.665577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.665585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.665604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 
00:28:50.296 [2024-05-15 16:06:48.675423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.675539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.675561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.675571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.675579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.675598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.685454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.685574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.685592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.685602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.685611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.685629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.695492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.695610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.695629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.695638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.695647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.695666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 
00:28:50.296 [2024-05-15 16:06:48.705478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.705602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.705620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.705630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.705639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.705658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.715502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.715620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.715639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.715649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.715657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.715680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.725491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.725636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.725654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.725664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.725672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.725692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 
00:28:50.296 [2024-05-15 16:06:48.735517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.735683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.735702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.735712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.735721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.735739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.745768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.746064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.746082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.746091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.746100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.746119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 00:28:50.296 [2024-05-15 16:06:48.755632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.296 [2024-05-15 16:06:48.755767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.296 [2024-05-15 16:06:48.755786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.296 [2024-05-15 16:06:48.755796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.296 [2024-05-15 16:06:48.755804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.296 [2024-05-15 16:06:48.755823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.296 qpair failed and we were unable to recover it. 
00:28:50.296 [2024-05-15 16:06:48.765652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.765771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.765792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.765802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.765811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.765829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 00:28:50.297 [2024-05-15 16:06:48.775709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.775831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.775849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.775859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.775868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.775887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 00:28:50.297 [2024-05-15 16:06:48.785638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.785810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.785828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.785838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.785847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.785866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 
00:28:50.297 [2024-05-15 16:06:48.795751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.795866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.795884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.795894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.795903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.795922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 00:28:50.297 [2024-05-15 16:06:48.805934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.806060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.806078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.806088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.806100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.806119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 00:28:50.297 [2024-05-15 16:06:48.815756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.815873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.815892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.815902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.815910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.815929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 
00:28:50.297 [2024-05-15 16:06:48.825980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.826097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.826114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.826124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.826133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.826152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 00:28:50.297 [2024-05-15 16:06:48.835784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.835903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.835922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.835932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.835941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.835960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 00:28:50.297 [2024-05-15 16:06:48.845833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.845988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.846007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.846017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.846025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.846044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 
00:28:50.297 [2024-05-15 16:06:48.855909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.297 [2024-05-15 16:06:48.856031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.297 [2024-05-15 16:06:48.856050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.297 [2024-05-15 16:06:48.856060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.297 [2024-05-15 16:06:48.856068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.297 [2024-05-15 16:06:48.856087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.297 qpair failed and we were unable to recover it. 00:28:50.557 [2024-05-15 16:06:48.865965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.557 [2024-05-15 16:06:48.866085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.557 [2024-05-15 16:06:48.866104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.557 [2024-05-15 16:06:48.866114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.557 [2024-05-15 16:06:48.866122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.557 [2024-05-15 16:06:48.866141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.557 qpair failed and we were unable to recover it. 00:28:50.557 [2024-05-15 16:06:48.875961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.557 [2024-05-15 16:06:48.876077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.557 [2024-05-15 16:06:48.876096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.557 [2024-05-15 16:06:48.876105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.557 [2024-05-15 16:06:48.876114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.557 [2024-05-15 16:06:48.876132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.557 qpair failed and we were unable to recover it. 
00:28:50.557 [2024-05-15 16:06:48.885977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.557 [2024-05-15 16:06:48.886097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.557 [2024-05-15 16:06:48.886115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.557 [2024-05-15 16:06:48.886125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.557 [2024-05-15 16:06:48.886133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.557 [2024-05-15 16:06:48.886152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.557 qpair failed and we were unable to recover it. 00:28:50.557 [2024-05-15 16:06:48.896021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.557 [2024-05-15 16:06:48.896138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.557 [2024-05-15 16:06:48.896157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.557 [2024-05-15 16:06:48.896167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.557 [2024-05-15 16:06:48.896179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.557 [2024-05-15 16:06:48.896202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.557 qpair failed and we were unable to recover it. 00:28:50.557 [2024-05-15 16:06:48.905965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.557 [2024-05-15 16:06:48.906265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.906283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.906293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.906301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.906320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 
00:28:50.558 [2024-05-15 16:06:48.916050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.916172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.916195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.916205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.916214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.916233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:48.926020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.926137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.926155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.926165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.926173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.926198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:48.936083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.936206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.936225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.936235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.936243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.936262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 
00:28:50.558 [2024-05-15 16:06:48.946133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.946289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.946308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.946318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.946327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.946346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:48.956158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.956285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.956304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.956314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.956322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.956342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:48.966194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.966313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.966332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.966342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.966351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.966370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 
00:28:50.558 [2024-05-15 16:06:48.976158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.976286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.976305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.976314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.976323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.976342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:48.986236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.986384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.986402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.986416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.986424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.986444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:48.996234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:48.996351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:48.996370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:48.996380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:48.996389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:48.996408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 
00:28:50.558 [2024-05-15 16:06:49.006256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:49.006374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:49.006392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:49.006402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:49.006411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:49.006430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:49.016303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:49.016418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:49.016436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:49.016445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:49.016454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:49.016472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:49.026390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:49.026512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:49.026533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:49.026543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:49.026551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:49.026570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 
00:28:50.558 [2024-05-15 16:06:49.036449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.558 [2024-05-15 16:06:49.036585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.558 [2024-05-15 16:06:49.036604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.558 [2024-05-15 16:06:49.036613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.558 [2024-05-15 16:06:49.036622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.558 [2024-05-15 16:06:49.036641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.558 qpair failed and we were unable to recover it. 00:28:50.558 [2024-05-15 16:06:49.046435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.559 [2024-05-15 16:06:49.046553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.559 [2024-05-15 16:06:49.046572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.559 [2024-05-15 16:06:49.046581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.559 [2024-05-15 16:06:49.046590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.559 [2024-05-15 16:06:49.046610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.559 qpair failed and we were unable to recover it. 00:28:50.559 [2024-05-15 16:06:49.056471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.559 [2024-05-15 16:06:49.056632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.559 [2024-05-15 16:06:49.056650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.559 [2024-05-15 16:06:49.056660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.559 [2024-05-15 16:06:49.056668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.559 [2024-05-15 16:06:49.056687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.559 qpair failed and we were unable to recover it. 
00:28:50.559 [2024-05-15 16:06:49.066517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.559 [2024-05-15 16:06:49.066659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.559 [2024-05-15 16:06:49.066678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.559 [2024-05-15 16:06:49.066688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.559 [2024-05-15 16:06:49.066696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.559 [2024-05-15 16:06:49.066716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.559 qpair failed and we were unable to recover it. 00:28:50.559 [2024-05-15 16:06:49.076534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.559 [2024-05-15 16:06:49.076651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.559 [2024-05-15 16:06:49.076673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.559 [2024-05-15 16:06:49.076684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.559 [2024-05-15 16:06:49.076692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:50.559 [2024-05-15 16:06:49.076712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:50.559 qpair failed and we were unable to recover it. 00:28:50.559 [2024-05-15 16:06:49.086588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.559 [2024-05-15 16:06:49.086744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.559 [2024-05-15 16:06:49.086776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.559 [2024-05-15 16:06:49.086792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.559 [2024-05-15 16:06:49.086804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.559 [2024-05-15 16:06:49.086833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.559 qpair failed and we were unable to recover it. 
00:28:50.559 [2024-05-15 16:06:49.096604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.559 [2024-05-15 16:06:49.096723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.559 [2024-05-15 16:06:49.096744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.559 [2024-05-15 16:06:49.096754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.559 [2024-05-15 16:06:49.096763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.559 [2024-05-15 16:06:49.096782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.559 qpair failed and we were unable to recover it. 00:28:50.559 [2024-05-15 16:06:49.106619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.559 [2024-05-15 16:06:49.106738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.559 [2024-05-15 16:06:49.106758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.559 [2024-05-15 16:06:49.106769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.559 [2024-05-15 16:06:49.106777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.559 [2024-05-15 16:06:49.106796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.559 qpair failed and we were unable to recover it. 00:28:50.559 [2024-05-15 16:06:49.116654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.559 [2024-05-15 16:06:49.116950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.559 [2024-05-15 16:06:49.116968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.559 [2024-05-15 16:06:49.116978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.559 [2024-05-15 16:06:49.116987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.559 [2024-05-15 16:06:49.117005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.559 qpair failed and we were unable to recover it. 
00:28:50.819 [2024-05-15 16:06:49.126680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.819 [2024-05-15 16:06:49.126792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.819 [2024-05-15 16:06:49.126812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.819 [2024-05-15 16:06:49.126822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.819 [2024-05-15 16:06:49.126831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.819 [2024-05-15 16:06:49.126850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.819 qpair failed and we were unable to recover it. 00:28:50.819 [2024-05-15 16:06:49.136713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.819 [2024-05-15 16:06:49.136833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.819 [2024-05-15 16:06:49.136853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.819 [2024-05-15 16:06:49.136863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.819 [2024-05-15 16:06:49.136872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.819 [2024-05-15 16:06:49.136890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.819 qpair failed and we were unable to recover it. 00:28:50.819 [2024-05-15 16:06:49.146719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.819 [2024-05-15 16:06:49.146835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.819 [2024-05-15 16:06:49.146854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.819 [2024-05-15 16:06:49.146864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.819 [2024-05-15 16:06:49.146873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.819 [2024-05-15 16:06:49.146891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.819 qpair failed and we were unable to recover it. 
00:28:50.819 [2024-05-15 16:06:49.156766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.819 [2024-05-15 16:06:49.156912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.819 [2024-05-15 16:06:49.156931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.819 [2024-05-15 16:06:49.156941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.819 [2024-05-15 16:06:49.156949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.819 [2024-05-15 16:06:49.156967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.819 qpair failed and we were unable to recover it. 00:28:50.819 [2024-05-15 16:06:49.166791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.819 [2024-05-15 16:06:49.166949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.819 [2024-05-15 16:06:49.166971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.819 [2024-05-15 16:06:49.166982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.819 [2024-05-15 16:06:49.166990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.819 [2024-05-15 16:06:49.167009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.819 qpair failed and we were unable to recover it. 00:28:50.819 [2024-05-15 16:06:49.176857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.819 [2024-05-15 16:06:49.176978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.819 [2024-05-15 16:06:49.176997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.819 [2024-05-15 16:06:49.177007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.819 [2024-05-15 16:06:49.177016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.819 [2024-05-15 16:06:49.177034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.819 qpair failed and we were unable to recover it. 
00:28:50.819 [2024-05-15 16:06:49.186835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.819 [2024-05-15 16:06:49.186955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.819 [2024-05-15 16:06:49.186974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.819 [2024-05-15 16:06:49.186984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.819 [2024-05-15 16:06:49.186993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.819 [2024-05-15 16:06:49.187011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.819 qpair failed and we were unable to recover it. 00:28:50.819 [2024-05-15 16:06:49.196783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.819 [2024-05-15 16:06:49.197071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.819 [2024-05-15 16:06:49.197090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.819 [2024-05-15 16:06:49.197100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.819 [2024-05-15 16:06:49.197109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.819 [2024-05-15 16:06:49.197127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.819 qpair failed and we were unable to recover it. 00:28:50.820 [2024-05-15 16:06:49.206893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.207051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.207070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.207079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.207088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.207109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 
00:28:50.820 [2024-05-15 16:06:49.216934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.217048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.217067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.217077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.217085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.217103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 00:28:50.820 [2024-05-15 16:06:49.226955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.227073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.227092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.227102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.227110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.227128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 00:28:50.820 [2024-05-15 16:06:49.236963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.237078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.237098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.237108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.237116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.237134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 
00:28:50.820 [2024-05-15 16:06:49.247060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.247176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.247200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.247211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.247220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.247238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 00:28:50.820 [2024-05-15 16:06:49.257051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.257169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.257196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.257206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.257215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.257233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 00:28:50.820 [2024-05-15 16:06:49.267070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.267211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.267230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.267240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.267249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.267268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 
00:28:50.820 [2024-05-15 16:06:49.277081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.277244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.277263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.277273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.277281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.277300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 00:28:50.820 [2024-05-15 16:06:49.287135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.287262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.287283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.287292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.287301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.287318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 00:28:50.820 [2024-05-15 16:06:49.297160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.297282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.297302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.297312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.297320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.297342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 
00:28:50.820 [2024-05-15 16:06:49.307180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.307302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.307321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.307331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.307340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.307358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 00:28:50.820 [2024-05-15 16:06:49.317216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.820 [2024-05-15 16:06:49.317331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.820 [2024-05-15 16:06:49.317350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.820 [2024-05-15 16:06:49.317360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.820 [2024-05-15 16:06:49.317368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.820 [2024-05-15 16:06:49.317386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.820 qpair failed and we were unable to recover it. 00:28:50.821 [2024-05-15 16:06:49.327160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.821 [2024-05-15 16:06:49.327284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.821 [2024-05-15 16:06:49.327304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.821 [2024-05-15 16:06:49.327314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.821 [2024-05-15 16:06:49.327322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.821 [2024-05-15 16:06:49.327340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.821 qpair failed and we were unable to recover it. 
00:28:50.821 [2024-05-15 16:06:49.337270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.821 [2024-05-15 16:06:49.337390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.821 [2024-05-15 16:06:49.337409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.821 [2024-05-15 16:06:49.337419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.821 [2024-05-15 16:06:49.337428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.821 [2024-05-15 16:06:49.337446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.821 qpair failed and we were unable to recover it. 00:28:50.821 [2024-05-15 16:06:49.347224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.821 [2024-05-15 16:06:49.347343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.821 [2024-05-15 16:06:49.347365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.821 [2024-05-15 16:06:49.347375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.821 [2024-05-15 16:06:49.347383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.821 [2024-05-15 16:06:49.347401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.821 qpair failed and we were unable to recover it. 00:28:50.821 [2024-05-15 16:06:49.357372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.821 [2024-05-15 16:06:49.357504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.821 [2024-05-15 16:06:49.357523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.821 [2024-05-15 16:06:49.357533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.821 [2024-05-15 16:06:49.357542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.821 [2024-05-15 16:06:49.357559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.821 qpair failed and we were unable to recover it. 
00:28:50.821 [2024-05-15 16:06:49.367341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.821 [2024-05-15 16:06:49.367456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.821 [2024-05-15 16:06:49.367475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.821 [2024-05-15 16:06:49.367485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.821 [2024-05-15 16:06:49.367494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.821 [2024-05-15 16:06:49.367512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.821 qpair failed and we were unable to recover it. 00:28:50.821 [2024-05-15 16:06:49.377392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.821 [2024-05-15 16:06:49.377513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.821 [2024-05-15 16:06:49.377532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.821 [2024-05-15 16:06:49.377542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.821 [2024-05-15 16:06:49.377551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:50.821 [2024-05-15 16:06:49.377568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.821 qpair failed and we were unable to recover it. 00:28:51.081 [2024-05-15 16:06:49.387415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.387549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.387568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.387577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.387586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.387607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 
00:28:51.081 [2024-05-15 16:06:49.397366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.397483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.397502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.397512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.397521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.397538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 00:28:51.081 [2024-05-15 16:06:49.407457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.407573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.407592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.407602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.407610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.407628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 00:28:51.081 [2024-05-15 16:06:49.417514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.417631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.417650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.417660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.417669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.417686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 
00:28:51.081 [2024-05-15 16:06:49.427439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.427557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.427576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.427586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.427595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.427613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 00:28:51.081 [2024-05-15 16:06:49.437537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.437657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.437680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.437690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.437698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.437716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 00:28:51.081 [2024-05-15 16:06:49.447569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.447691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.447710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.447720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.447728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.447746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 
00:28:51.081 [2024-05-15 16:06:49.457585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.457702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.457721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.457731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.457739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.457757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 00:28:51.081 [2024-05-15 16:06:49.467595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.081 [2024-05-15 16:06:49.467717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.081 [2024-05-15 16:06:49.467736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.081 [2024-05-15 16:06:49.467746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.081 [2024-05-15 16:06:49.467755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.081 [2024-05-15 16:06:49.467772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.081 qpair failed and we were unable to recover it. 00:28:51.081 [2024-05-15 16:06:49.477676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.477800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.477819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.477829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.477842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.477861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 
00:28:51.082 [2024-05-15 16:06:49.487684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.487801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.487820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.487830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.487839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.487858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 00:28:51.082 [2024-05-15 16:06:49.497715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.497835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.497854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.497864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.497873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.497891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 00:28:51.082 [2024-05-15 16:06:49.507783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.507901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.507920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.507930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.507938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.507957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 
00:28:51.082 [2024-05-15 16:06:49.517693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.517807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.517826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.517836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.517845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.517863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 00:28:51.082 [2024-05-15 16:06:49.527792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.527913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.527931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.527941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.527950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.527968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 00:28:51.082 [2024-05-15 16:06:49.537801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.537927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.537947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.537957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.537966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.537985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 
00:28:51.082 [2024-05-15 16:06:49.547839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.547982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.548002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.548012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.548021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.548039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 00:28:51.082 [2024-05-15 16:06:49.557863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.557979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.557998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.558008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.558017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.558035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 00:28:51.082 [2024-05-15 16:06:49.567903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.568016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.568036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.568046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.568057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.568076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 
00:28:51.082 [2024-05-15 16:06:49.577936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.578053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.578072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.578082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.578090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.578108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 00:28:51.082 [2024-05-15 16:06:49.587944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.588248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.588267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.588277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.588286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.082 [2024-05-15 16:06:49.588304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.082 qpair failed and we were unable to recover it. 00:28:51.082 [2024-05-15 16:06:49.598026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.082 [2024-05-15 16:06:49.598143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.082 [2024-05-15 16:06:49.598163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.082 [2024-05-15 16:06:49.598173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.082 [2024-05-15 16:06:49.598181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.083 [2024-05-15 16:06:49.598206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.083 qpair failed and we were unable to recover it. 
00:28:51.083 [2024-05-15 16:06:49.607925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.083 [2024-05-15 16:06:49.608044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.083 [2024-05-15 16:06:49.608063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.083 [2024-05-15 16:06:49.608072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.083 [2024-05-15 16:06:49.608081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.083 [2024-05-15 16:06:49.608099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.083 qpair failed and we were unable to recover it. 00:28:51.083 [2024-05-15 16:06:49.618046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.083 [2024-05-15 16:06:49.618167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.083 [2024-05-15 16:06:49.618187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.083 [2024-05-15 16:06:49.618203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.083 [2024-05-15 16:06:49.618211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.083 [2024-05-15 16:06:49.618230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.083 qpair failed and we were unable to recover it. 00:28:51.083 [2024-05-15 16:06:49.628072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.083 [2024-05-15 16:06:49.628187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.083 [2024-05-15 16:06:49.628210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.083 [2024-05-15 16:06:49.628219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.083 [2024-05-15 16:06:49.628228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.083 [2024-05-15 16:06:49.628246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.083 qpair failed and we were unable to recover it. 
00:28:51.083 [2024-05-15 16:06:49.638108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.083 [2024-05-15 16:06:49.638246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.083 [2024-05-15 16:06:49.638266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.083 [2024-05-15 16:06:49.638276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.083 [2024-05-15 16:06:49.638284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.083 [2024-05-15 16:06:49.638302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.083 qpair failed and we were unable to recover it. 00:28:51.342 [2024-05-15 16:06:49.648121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.342 [2024-05-15 16:06:49.648241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.342 [2024-05-15 16:06:49.648261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.342 [2024-05-15 16:06:49.648270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.342 [2024-05-15 16:06:49.648279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.342 [2024-05-15 16:06:49.648297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.342 qpair failed and we were unable to recover it. 00:28:51.342 [2024-05-15 16:06:49.658159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.342 [2024-05-15 16:06:49.658279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.342 [2024-05-15 16:06:49.658298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.342 [2024-05-15 16:06:49.658309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.342 [2024-05-15 16:06:49.658320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.342 [2024-05-15 16:06:49.658338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.342 qpair failed and we were unable to recover it. 
00:28:51.342 [2024-05-15 16:06:49.668228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.342 [2024-05-15 16:06:49.668347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.342 [2024-05-15 16:06:49.668366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.342 [2024-05-15 16:06:49.668376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.342 [2024-05-15 16:06:49.668384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.342 [2024-05-15 16:06:49.668402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.342 qpair failed and we were unable to recover it. 00:28:51.342 [2024-05-15 16:06:49.678221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.342 [2024-05-15 16:06:49.678517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.342 [2024-05-15 16:06:49.678536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.342 [2024-05-15 16:06:49.678546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.342 [2024-05-15 16:06:49.678555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.342 [2024-05-15 16:06:49.678573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.342 qpair failed and we were unable to recover it. 00:28:51.342 [2024-05-15 16:06:49.688235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.342 [2024-05-15 16:06:49.688350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.342 [2024-05-15 16:06:49.688370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.342 [2024-05-15 16:06:49.688380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.342 [2024-05-15 16:06:49.688388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.342 [2024-05-15 16:06:49.688406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.342 qpair failed and we were unable to recover it. 
00:28:51.343 [2024-05-15 16:06:49.698279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.698399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.698418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.698428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.698436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x644560 00:28:51.343 [2024-05-15 16:06:49.698454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.708276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.708433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.708463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.708478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.708490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.708518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.718319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.718437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.718456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.718467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.718475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.718496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 
00:28:51.343 [2024-05-15 16:06:49.728345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.728467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.728487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.728498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.728507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.728526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.738318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.738436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.738456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.738466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.738474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.738493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.748339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.748461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.748479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.748492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.748501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.748520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 
00:28:51.343 [2024-05-15 16:06:49.758413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.758529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.758548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.758558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.758567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.758586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.768455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.768571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.768590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.768601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.768609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.768629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.778490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.778606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.778625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.778635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.778644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.778663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 
00:28:51.343 [2024-05-15 16:06:49.788481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.788599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.788618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.788628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.788637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.788656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.798512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.798633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.798651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.798661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.798670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.798689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.808546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.808664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.808683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.808692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.808701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.808721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 
00:28:51.343 [2024-05-15 16:06:49.818645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.818799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.818817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.818827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.818836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.343 [2024-05-15 16:06:49.818855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.343 qpair failed and we were unable to recover it. 00:28:51.343 [2024-05-15 16:06:49.828621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.343 [2024-05-15 16:06:49.828742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.343 [2024-05-15 16:06:49.828760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.343 [2024-05-15 16:06:49.828770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.343 [2024-05-15 16:06:49.828778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.344 [2024-05-15 16:06:49.828798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.344 qpair failed and we were unable to recover it. 00:28:51.344 [2024-05-15 16:06:49.838643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.344 [2024-05-15 16:06:49.838762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.344 [2024-05-15 16:06:49.838785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.344 [2024-05-15 16:06:49.838795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.344 [2024-05-15 16:06:49.838803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.344 [2024-05-15 16:06:49.838823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.344 qpair failed and we were unable to recover it. 
00:28:51.344 [2024-05-15 16:06:49.848664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.344 [2024-05-15 16:06:49.848782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.344 [2024-05-15 16:06:49.848800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.344 [2024-05-15 16:06:49.848810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.344 [2024-05-15 16:06:49.848819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.344 [2024-05-15 16:06:49.848838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.344 qpair failed and we were unable to recover it. 00:28:51.344 [2024-05-15 16:06:49.858697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.344 [2024-05-15 16:06:49.858814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.344 [2024-05-15 16:06:49.858833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.344 [2024-05-15 16:06:49.858842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.344 [2024-05-15 16:06:49.858851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.344 [2024-05-15 16:06:49.858869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.344 qpair failed and we were unable to recover it. 00:28:51.344 [2024-05-15 16:06:49.868740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.344 [2024-05-15 16:06:49.868874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.344 [2024-05-15 16:06:49.868893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.344 [2024-05-15 16:06:49.868903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.344 [2024-05-15 16:06:49.868912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.344 [2024-05-15 16:06:49.868930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.344 qpair failed and we were unable to recover it. 
00:28:51.344 [2024-05-15 16:06:49.878791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.344 [2024-05-15 16:06:49.878955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.344 [2024-05-15 16:06:49.878974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.344 [2024-05-15 16:06:49.878984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.344 [2024-05-15 16:06:49.878993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.344 [2024-05-15 16:06:49.879015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.344 qpair failed and we were unable to recover it. 00:28:51.344 [2024-05-15 16:06:49.888788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.344 [2024-05-15 16:06:49.888899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.344 [2024-05-15 16:06:49.888917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.344 [2024-05-15 16:06:49.888927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.344 [2024-05-15 16:06:49.888935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.344 [2024-05-15 16:06:49.888955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.344 qpair failed and we were unable to recover it. 00:28:51.344 [2024-05-15 16:06:49.898822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.344 [2024-05-15 16:06:49.898938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.344 [2024-05-15 16:06:49.898957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.344 [2024-05-15 16:06:49.898967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.344 [2024-05-15 16:06:49.898975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.344 [2024-05-15 16:06:49.898994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.344 qpair failed and we were unable to recover it. 
00:28:51.603 [2024-05-15 16:06:49.908895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.603 [2024-05-15 16:06:49.909055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.603 [2024-05-15 16:06:49.909077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.603 [2024-05-15 16:06:49.909088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.603 [2024-05-15 16:06:49.909097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.603 [2024-05-15 16:06:49.909117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.603 qpair failed and we were unable to recover it. 00:28:51.603 [2024-05-15 16:06:49.918905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.603 [2024-05-15 16:06:49.919031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.603 [2024-05-15 16:06:49.919051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.603 [2024-05-15 16:06:49.919062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.603 [2024-05-15 16:06:49.919071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.603 [2024-05-15 16:06:49.919091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.603 qpair failed and we were unable to recover it. 00:28:51.603 [2024-05-15 16:06:49.928923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.603 [2024-05-15 16:06:49.929041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.603 [2024-05-15 16:06:49.929065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.603 [2024-05-15 16:06:49.929075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.603 [2024-05-15 16:06:49.929084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.603 [2024-05-15 16:06:49.929104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.603 qpair failed and we were unable to recover it. 
00:28:51.603 [2024-05-15 16:06:49.938948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.603 [2024-05-15 16:06:49.939069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.603 [2024-05-15 16:06:49.939088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.603 [2024-05-15 16:06:49.939098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.603 [2024-05-15 16:06:49.939106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.603 [2024-05-15 16:06:49.939125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.603 qpair failed and we were unable to recover it. 00:28:51.603 [2024-05-15 16:06:49.948976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.603 [2024-05-15 16:06:49.949096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.603 [2024-05-15 16:06:49.949114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.603 [2024-05-15 16:06:49.949124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.603 [2024-05-15 16:06:49.949133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.603 [2024-05-15 16:06:49.949152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.603 qpair failed and we were unable to recover it. 00:28:51.603 [2024-05-15 16:06:49.959003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.603 [2024-05-15 16:06:49.959123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.603 [2024-05-15 16:06:49.959142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.603 [2024-05-15 16:06:49.959151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.603 [2024-05-15 16:06:49.959160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.603 [2024-05-15 16:06:49.959179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.603 qpair failed and we were unable to recover it. 
00:28:51.603 [2024-05-15 16:06:49.969036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.603 [2024-05-15 16:06:49.969173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.603 [2024-05-15 16:06:49.969198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.603 [2024-05-15 16:06:49.969209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.603 [2024-05-15 16:06:49.969218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:49.969240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:49.979068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:49.979187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:49.979211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:49.979221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:49.979229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:49.979248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:49.989117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:49.989257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:49.989276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:49.989285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:49.989294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:49.989313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 
00:28:51.604 [2024-05-15 16:06:49.999127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:49.999259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:49.999278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:49.999288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:49.999297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:49.999315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:50.009170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.009298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.009317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.009328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.009336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.009355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:50.019200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.019383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.019406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.019417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.019426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.019446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 
00:28:51.604 [2024-05-15 16:06:50.029219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.029524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.029542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.029552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.029561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.029580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:50.039224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.039350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.039371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.039381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.039390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.039411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:50.049331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.049494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.049534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.049545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.049555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.049576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 
00:28:51.604 [2024-05-15 16:06:50.059311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.059443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.059462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.059472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.059485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.059505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:50.069327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.069447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.069465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.069475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.069484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.069503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:50.079357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.079482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.079501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.079511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.079520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.079539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 
00:28:51.604 [2024-05-15 16:06:50.089408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.089548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.089567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.089577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.089586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.089605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.604 [2024-05-15 16:06:50.099372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.604 [2024-05-15 16:06:50.099493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.604 [2024-05-15 16:06:50.099512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.604 [2024-05-15 16:06:50.099523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.604 [2024-05-15 16:06:50.099531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.604 [2024-05-15 16:06:50.099550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.604 qpair failed and we were unable to recover it. 00:28:51.605 [2024-05-15 16:06:50.109388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.605 [2024-05-15 16:06:50.109522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.605 [2024-05-15 16:06:50.109541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.605 [2024-05-15 16:06:50.109551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.605 [2024-05-15 16:06:50.109560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.605 [2024-05-15 16:06:50.109579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.605 qpair failed and we were unable to recover it. 
00:28:51.605 [2024-05-15 16:06:50.119417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.605 [2024-05-15 16:06:50.119541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.605 [2024-05-15 16:06:50.119559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.605 [2024-05-15 16:06:50.119569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.605 [2024-05-15 16:06:50.119578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.605 [2024-05-15 16:06:50.119597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.605 qpair failed and we were unable to recover it. 00:28:51.605 [2024-05-15 16:06:50.129502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.605 [2024-05-15 16:06:50.129628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.605 [2024-05-15 16:06:50.129647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.605 [2024-05-15 16:06:50.129657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.605 [2024-05-15 16:06:50.129666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.605 [2024-05-15 16:06:50.129685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.605 qpair failed and we were unable to recover it. 00:28:51.605 [2024-05-15 16:06:50.139473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.605 [2024-05-15 16:06:50.139593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.605 [2024-05-15 16:06:50.139612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.605 [2024-05-15 16:06:50.139622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.605 [2024-05-15 16:06:50.139631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.605 [2024-05-15 16:06:50.139650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.605 qpair failed and we were unable to recover it. 
00:28:51.605 [2024-05-15 16:06:50.149587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.605 [2024-05-15 16:06:50.149709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.605 [2024-05-15 16:06:50.149728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.605 [2024-05-15 16:06:50.149741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.605 [2024-05-15 16:06:50.149750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.605 [2024-05-15 16:06:50.149769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.605 qpair failed and we were unable to recover it. 00:28:51.605 [2024-05-15 16:06:50.159750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.605 [2024-05-15 16:06:50.159867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.605 [2024-05-15 16:06:50.159885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.605 [2024-05-15 16:06:50.159895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.605 [2024-05-15 16:06:50.159903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.605 [2024-05-15 16:06:50.159923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.605 qpair failed and we were unable to recover it. 00:28:51.864 [2024-05-15 16:06:50.169606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.169732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.169754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.169765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.169774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.169794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 
00:28:51.864 [2024-05-15 16:06:50.179640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.179761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.179781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.179792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.179801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.179820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 00:28:51.864 [2024-05-15 16:06:50.189681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.189799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.189818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.189828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.189837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.189857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 00:28:51.864 [2024-05-15 16:06:50.199687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.199812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.199833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.199843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.199852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.199871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 
00:28:51.864 [2024-05-15 16:06:50.209674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.209793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.209812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.209822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.209830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.209849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 00:28:51.864 [2024-05-15 16:06:50.219699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.219816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.219835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.219845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.219854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.219873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 00:28:51.864 [2024-05-15 16:06:50.229709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.229832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.229851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.229861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.229869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.229889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 
00:28:51.864 [2024-05-15 16:06:50.239749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.239897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.239917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.239930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.239938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.239958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 00:28:51.864 [2024-05-15 16:06:50.249771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.249889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.249908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.249918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.249927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.249946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 00:28:51.864 [2024-05-15 16:06:50.259870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.260019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.864 [2024-05-15 16:06:50.260038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.864 [2024-05-15 16:06:50.260048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.864 [2024-05-15 16:06:50.260057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.864 [2024-05-15 16:06:50.260075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.864 qpair failed and we were unable to recover it. 
00:28:51.864 [2024-05-15 16:06:50.269828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.864 [2024-05-15 16:06:50.269947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.269965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.269975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.269983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.270002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.279855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.279968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.279986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.279995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.280004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.280023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.289937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.290255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.290273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.290283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.290292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.290311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 
00:28:51.865 [2024-05-15 16:06:50.299993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.300120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.300139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.300149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.300157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.300176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.309957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.310078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.310097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.310106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.310115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.310133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.320052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.320167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.320185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.320200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.320209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.320228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 
00:28:51.865 [2024-05-15 16:06:50.330082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.330209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.330231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.330241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.330250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.330269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.340108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.340234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.340253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.340263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.340271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.340290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.350162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.350287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.350306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.350315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.350324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.350342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 
00:28:51.865 [2024-05-15 16:06:50.360149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.360279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.360298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.360307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.360316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.360335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.370178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.370307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.370326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.370336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.370344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.370366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.380264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.380390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.380409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.380419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.380427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.380446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 
00:28:51.865 [2024-05-15 16:06:50.390259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.390377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.390396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.390406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.390414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.390434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.865 [2024-05-15 16:06:50.400267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.865 [2024-05-15 16:06:50.400403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.865 [2024-05-15 16:06:50.400422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.865 [2024-05-15 16:06:50.400432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.865 [2024-05-15 16:06:50.400440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.865 [2024-05-15 16:06:50.400459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.865 qpair failed and we were unable to recover it. 00:28:51.866 [2024-05-15 16:06:50.410296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.866 [2024-05-15 16:06:50.410429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.866 [2024-05-15 16:06:50.410447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.866 [2024-05-15 16:06:50.410457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.866 [2024-05-15 16:06:50.410465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.866 [2024-05-15 16:06:50.410485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.866 qpair failed and we were unable to recover it. 
00:28:51.866 [2024-05-15 16:06:50.420354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.866 [2024-05-15 16:06:50.420474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.866 [2024-05-15 16:06:50.420495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.866 [2024-05-15 16:06:50.420505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.866 [2024-05-15 16:06:50.420514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:51.866 [2024-05-15 16:06:50.420533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.866 qpair failed and we were unable to recover it. 00:28:52.125 [2024-05-15 16:06:50.430345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.125 [2024-05-15 16:06:50.430463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.125 [2024-05-15 16:06:50.430485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.125 [2024-05-15 16:06:50.430495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.125 [2024-05-15 16:06:50.430504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.125 [2024-05-15 16:06:50.430524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.125 qpair failed and we were unable to recover it. 00:28:52.125 [2024-05-15 16:06:50.440319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.125 [2024-05-15 16:06:50.440442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.125 [2024-05-15 16:06:50.440463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.125 [2024-05-15 16:06:50.440473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.125 [2024-05-15 16:06:50.440482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.125 [2024-05-15 16:06:50.440502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.125 qpair failed and we were unable to recover it. 
00:28:52.125 [2024-05-15 16:06:50.450435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.125 [2024-05-15 16:06:50.450553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.125 [2024-05-15 16:06:50.450572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.125 [2024-05-15 16:06:50.450582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.125 [2024-05-15 16:06:50.450591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.125 [2024-05-15 16:06:50.450612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.125 qpair failed and we were unable to recover it. 00:28:52.125 [2024-05-15 16:06:50.460377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.125 [2024-05-15 16:06:50.460497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.125 [2024-05-15 16:06:50.460516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.125 [2024-05-15 16:06:50.460526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.125 [2024-05-15 16:06:50.460538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.125 [2024-05-15 16:06:50.460557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.125 qpair failed and we were unable to recover it. 00:28:52.125 [2024-05-15 16:06:50.470487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.125 [2024-05-15 16:06:50.470610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.125 [2024-05-15 16:06:50.470629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.125 [2024-05-15 16:06:50.470639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.125 [2024-05-15 16:06:50.470648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.125 [2024-05-15 16:06:50.470667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.125 qpair failed and we were unable to recover it. 
00:28:52.125 [2024-05-15 16:06:50.480419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.125 [2024-05-15 16:06:50.480537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.125 [2024-05-15 16:06:50.480556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.125 [2024-05-15 16:06:50.480566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.125 [2024-05-15 16:06:50.480575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.125 [2024-05-15 16:06:50.480594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.125 qpair failed and we were unable to recover it. 00:28:52.125 [2024-05-15 16:06:50.490541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.125 [2024-05-15 16:06:50.490671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.125 [2024-05-15 16:06:50.490689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.125 [2024-05-15 16:06:50.490699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.125 [2024-05-15 16:06:50.490707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.125 [2024-05-15 16:06:50.490726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.125 qpair failed and we were unable to recover it. 00:28:52.125 [2024-05-15 16:06:50.500575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.125 [2024-05-15 16:06:50.500693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.125 [2024-05-15 16:06:50.500712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.125 [2024-05-15 16:06:50.500722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.125 [2024-05-15 16:06:50.500730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.125 [2024-05-15 16:06:50.500749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 
00:28:52.126 [2024-05-15 16:06:50.510551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.510680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.510699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.510709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.510717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.510736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.520594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.520707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.520725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.520735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.520744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.520763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.530623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.530743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.530766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.530776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.530784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.530803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 
00:28:52.126 [2024-05-15 16:06:50.540599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.540722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.540742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.540753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.540761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.540781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.550664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.550784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.550803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.550816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.550825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.550844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.560713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.560833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.560851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.560861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.560870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.560890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 
00:28:52.126 [2024-05-15 16:06:50.570763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.570882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.570900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.570910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.570919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.570937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.580812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.580931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.580951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.580960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.580969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.580988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.590782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.590902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.590921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.590931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.590939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.590958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 
00:28:52.126 [2024-05-15 16:06:50.600758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.600879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.600899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.600909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.600917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.600936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.610865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.610996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.611015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.611024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.611033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.611052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.620915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.621062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.621081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.621091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.621099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.621117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 
00:28:52.126 [2024-05-15 16:06:50.630847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.630967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.630986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.630996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.126 [2024-05-15 16:06:50.631005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.126 [2024-05-15 16:06:50.631023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 16:06:50.640923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.126 [2024-05-15 16:06:50.641042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.126 [2024-05-15 16:06:50.641061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.126 [2024-05-15 16:06:50.641074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.127 [2024-05-15 16:06:50.641082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.127 [2024-05-15 16:06:50.641101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 16:06:50.650998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.127 [2024-05-15 16:06:50.651113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.127 [2024-05-15 16:06:50.651132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.127 [2024-05-15 16:06:50.651142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.127 [2024-05-15 16:06:50.651150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.127 [2024-05-15 16:06:50.651169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.127 qpair failed and we were unable to recover it. 
00:28:52.127 [2024-05-15 16:06:50.661051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.127 [2024-05-15 16:06:50.661179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.127 [2024-05-15 16:06:50.661222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.127 [2024-05-15 16:06:50.661233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.127 [2024-05-15 16:06:50.661242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.127 [2024-05-15 16:06:50.661262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 16:06:50.671030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.127 [2024-05-15 16:06:50.671154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.127 [2024-05-15 16:06:50.671173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.127 [2024-05-15 16:06:50.671183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.127 [2024-05-15 16:06:50.671194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.127 [2024-05-15 16:06:50.671214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 16:06:50.681064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.127 [2024-05-15 16:06:50.681180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.127 [2024-05-15 16:06:50.681204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.127 [2024-05-15 16:06:50.681214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.127 [2024-05-15 16:06:50.681223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.127 [2024-05-15 16:06:50.681242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.127 qpair failed and we were unable to recover it. 
00:28:52.386 [2024-05-15 16:06:50.691102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.691227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.691249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.691259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.691268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.691289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.701125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.701250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.701272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.701283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.701292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.701312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.711155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.711283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.711302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.711311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.711320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.711339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 
00:28:52.387 [2024-05-15 16:06:50.721159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.721284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.721302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.721312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.721320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.721339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.731232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.731354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.731375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.731385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.731394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.731413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.741245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.741364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.741382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.741391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.741400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.741418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 
00:28:52.387 [2024-05-15 16:06:50.751262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.751383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.751401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.751410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.751419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.751437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.761289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.761409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.761427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.761436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.761444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.761463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.771308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.771427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.771444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.771454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.771462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.771485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 
00:28:52.387 [2024-05-15 16:06:50.781330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.781451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.781470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.781480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.781489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.781507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.791413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.791532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.791550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.791559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.791568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.791587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.801321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.801438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.801456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.801466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.801474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.801493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 
00:28:52.387 [2024-05-15 16:06:50.811410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.811573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.811591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.811601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.387 [2024-05-15 16:06:50.811609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.387 [2024-05-15 16:06:50.811628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.387 qpair failed and we were unable to recover it. 00:28:52.387 [2024-05-15 16:06:50.821479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.387 [2024-05-15 16:06:50.821601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.387 [2024-05-15 16:06:50.821622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.387 [2024-05-15 16:06:50.821632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.821640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.821660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.388 [2024-05-15 16:06:50.831483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.831616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.831634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.831644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.831653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.831672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 
00:28:52.388 [2024-05-15 16:06:50.841522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.841642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.841662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.841672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.841681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.841700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.388 [2024-05-15 16:06:50.851545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.851665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.851683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.851692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.851701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.851720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.388 [2024-05-15 16:06:50.861571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.861694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.861712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.861722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.861734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.861752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 
00:28:52.388 [2024-05-15 16:06:50.871600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.871717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.871735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.871745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.871754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.871772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.388 [2024-05-15 16:06:50.881622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.881775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.881793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.881803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.881811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.881830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.388 [2024-05-15 16:06:50.891659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.891774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.891792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.891802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.891810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.891829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 
00:28:52.388 [2024-05-15 16:06:50.901692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.901815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.901834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.901844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.901853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.901872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.388 [2024-05-15 16:06:50.911748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.911868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.911886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.911896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.911904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.911923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.388 [2024-05-15 16:06:50.921737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.921854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.921872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.921882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.921890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.921909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 
00:28:52.388 [2024-05-15 16:06:50.931708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.931826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.931844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.931854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.931862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.931881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.388 [2024-05-15 16:06:50.941805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.388 [2024-05-15 16:06:50.941923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.388 [2024-05-15 16:06:50.941941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.388 [2024-05-15 16:06:50.941950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.388 [2024-05-15 16:06:50.941959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.388 [2024-05-15 16:06:50.941977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.388 qpair failed and we were unable to recover it. 00:28:52.648 [2024-05-15 16:06:50.951764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:50.951887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:50.951908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:50.951918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:50.951931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:50.951951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 
00:28:52.648 [2024-05-15 16:06:50.961880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:50.961999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:50.962020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:50.962030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:50.962039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:50.962059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 00:28:52.648 [2024-05-15 16:06:50.971908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:50.972025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:50.972043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:50.972053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:50.972061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:50.972081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 00:28:52.648 [2024-05-15 16:06:50.981930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:50.982049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:50.982067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:50.982077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:50.982086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:50.982105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 
00:28:52.648 [2024-05-15 16:06:50.991913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:50.992036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:50.992054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:50.992063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:50.992072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:50.992090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 00:28:52.648 [2024-05-15 16:06:51.001924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:51.002041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:51.002059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:51.002069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:51.002077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:51.002096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 00:28:52.648 [2024-05-15 16:06:51.012002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:51.012114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:51.012132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:51.012141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:51.012150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:51.012169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 
00:28:52.648 [2024-05-15 16:06:51.022027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:51.022144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:51.022161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:51.022171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:51.022179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:51.022202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 00:28:52.648 [2024-05-15 16:06:51.032069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:51.032185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:51.032209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:51.032218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:51.032227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f74000b90 00:28:52.648 [2024-05-15 16:06:51.032246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.648 qpair failed and we were unable to recover it. 00:28:52.648 [2024-05-15 16:06:51.042103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.648 [2024-05-15 16:06:51.042279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.648 [2024-05-15 16:06:51.042309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.648 [2024-05-15 16:06:51.042328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.648 [2024-05-15 16:06:51.042342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f64000b90 00:28:52.648 [2024-05-15 16:06:51.042371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:52.649 qpair failed and we were unable to recover it. 
00:28:52.649 [2024-05-15 16:06:51.052110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.649 [2024-05-15 16:06:51.052240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.649 [2024-05-15 16:06:51.052259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.649 [2024-05-15 16:06:51.052270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.649 [2024-05-15 16:06:51.052278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f64000b90 00:28:52.649 [2024-05-15 16:06:51.052298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:52.649 qpair failed and we were unable to recover it. 00:28:52.649 [2024-05-15 16:06:51.062100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.649 [2024-05-15 16:06:51.062227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.649 [2024-05-15 16:06:51.062245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.649 [2024-05-15 16:06:51.062255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.649 [2024-05-15 16:06:51.062264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f64000b90 00:28:52.649 [2024-05-15 16:06:51.062283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:52.649 qpair failed and we were unable to recover it. 00:28:52.649 [2024-05-15 16:06:51.072158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.649 [2024-05-15 16:06:51.072454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.649 [2024-05-15 16:06:51.072478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.649 [2024-05-15 16:06:51.072489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.649 [2024-05-15 16:06:51.072499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:52.649 [2024-05-15 16:06:51.072521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:52.649 qpair failed and we were unable to recover it. 
00:28:52.649 [2024-05-15 16:06:51.082195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.649 [2024-05-15 16:06:51.082320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.649 [2024-05-15 16:06:51.082338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.649 [2024-05-15 16:06:51.082348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.649 [2024-05-15 16:06:51.082357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f6c000b90 00:28:52.649 [2024-05-15 16:06:51.082376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:52.649 qpair failed and we were unable to recover it. 00:28:52.649 [2024-05-15 16:06:51.082454] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:52.649 A controller has encountered a failure and is being reset. 00:28:52.649 Controller properly reset. 00:28:52.649 Initializing NVMe Controllers 00:28:52.649 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:52.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:52.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:52.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:52.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:52.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:52.649 Initialization complete. Launching workers. 
00:28:52.649 Starting thread on core 1 00:28:52.649 Starting thread on core 2 00:28:52.649 Starting thread on core 3 00:28:52.649 Starting thread on core 0 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:52.649 00:28:52.649 real 0m11.384s 00:28:52.649 user 0m19.955s 00:28:52.649 sys 0m4.748s 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.649 ************************************ 00:28:52.649 END TEST nvmf_target_disconnect_tc2 00:28:52.649 ************************************ 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:52.649 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:52.908 rmmod nvme_tcp 00:28:52.908 rmmod nvme_fabrics 00:28:52.908 rmmod nvme_keyring 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3924113 ']' 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3924113 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3924113 ']' 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3924113 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924113 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924113' 00:28:52.908 killing process with pid 3924113 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3924113 00:28:52.908 [2024-05-15 16:06:51.320672] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:28:52.908 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3924113 00:28:53.166 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:53.166 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:53.166 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:53.166 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:53.166 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:53.166 16:06:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.166 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:53.166 16:06:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.067 16:06:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:55.067 00:28:55.067 real 0m20.706s 00:28:55.067 user 0m47.476s 00:28:55.067 sys 0m10.328s 00:28:55.067 16:06:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:55.067 16:06:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:55.067 ************************************ 00:28:55.067 END TEST nvmf_target_disconnect 00:28:55.067 ************************************ 00:28:55.325 16:06:53 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:28:55.325 16:06:53 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.325 16:06:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:55.325 16:06:53 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:55.325 00:28:55.325 real 22m10.863s 00:28:55.325 user 45m38.512s 00:28:55.325 sys 8m4.015s 00:28:55.325 16:06:53 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:55.325 16:06:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:55.325 ************************************ 00:28:55.325 END TEST nvmf_tcp 00:28:55.325 ************************************ 00:28:55.325 16:06:53 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:28:55.325 16:06:53 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:55.325 16:06:53 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:55.325 16:06:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:55.325 16:06:53 -- common/autotest_common.sh@10 -- # set +x 00:28:55.325 ************************************ 00:28:55.325 START TEST spdkcli_nvmf_tcp 00:28:55.325 ************************************ 00:28:55.325 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:55.325 * Looking for test storage... 
00:28:55.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:55.325 16:06:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:55.325 16:06:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:55.325 16:06:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:55.325 16:06:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.325 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3925853 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3925853 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3925853 ']' 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:55.583 16:06:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:55.583 [2024-05-15 16:06:53.969150] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:28:55.583 [2024-05-15 16:06:53.969205] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925853 ] 00:28:55.583 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.583 [2024-05-15 16:06:54.037619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:55.583 [2024-05-15 16:06:54.112558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.583 [2024-05-15 16:06:54.112562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.517 16:06:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:56.517 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:56.517 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:56.517 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:56.517 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:56.517 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:56.517 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:56.517 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:56.517 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:56.517 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:56.517 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:56.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:56.517 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:56.517 ' 00:28:59.043 [2024-05-15 16:06:57.209832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.975 [2024-05-15 16:06:58.385304] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:59.975 [2024-05-15 16:06:58.385690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:02.500 [2024-05-15 16:07:00.564388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:04.396 [2024-05-15 16:07:02.474219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:05.767 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:05.767 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:05.767 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:05.767 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:05.767 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:05.767 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:05.767 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:05.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:05.767 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:05.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:05.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:05.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:05.767 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:05.767 16:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:05.767 16:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.767 16:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.767 16:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:05.767 16:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:05.767 16:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.767 16:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:05.767 16:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.025 16:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:06.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:06.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:06.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:06.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:06.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:06.025 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:06.025 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:06.025 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:06.025 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:06.025 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:06.025 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:06.025 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:06.025 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:06.025 ' 00:29:11.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:11.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:11.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:11.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:11.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:11.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:11.345 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:11.345 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:11.345 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:11.345 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:11.345 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:11.345 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:11.345 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:11.345 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3925853 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3925853 ']' 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3925853 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3925853 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3925853' 00:29:11.345 killing process with pid 3925853 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3925853 00:29:11.345 [2024-05-15 16:07:09.567097] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3925853 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3925853 ']' 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3925853 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3925853 ']' 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3925853 00:29:11.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3925853) - No such process 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3925853 is not found' 00:29:11.345 Process with pid 3925853 is not found 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:11.345 00:29:11.345 real 0m15.985s 00:29:11.345 user 0m33.042s 00:29:11.345 sys 0m0.868s 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:11.345 16:07:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:29:11.345 ************************************ 00:29:11.345 END TEST spdkcli_nvmf_tcp 00:29:11.345 ************************************ 00:29:11.345 16:07:09 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:11.345 16:07:09 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:11.345 16:07:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:11.345 16:07:09 -- common/autotest_common.sh@10 -- # set +x 00:29:11.345 ************************************ 00:29:11.345 START TEST nvmf_identify_passthru 00:29:11.345 ************************************ 00:29:11.345 16:07:09 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:11.604 * Looking for test storage... 00:29:11.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:11.605 16:07:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.605 16:07:09 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.605 16:07:09 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.605 16:07:09 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.605 16:07:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.605 16:07:09 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.605 16:07:09 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.605 16:07:09 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:11.605 16:07:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.605 16:07:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.605 16:07:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:11.605 16:07:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:11.605 16:07:09 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:11.605 16:07:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.167 16:07:16 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:18.167 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:18.167 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:18.167 Found net devices under 0000:af:00.0: cvl_0_0 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:18.167 Found net devices under 0000:af:00.1: cvl_0_1 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:18.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:29:18.167 00:29:18.167 --- 10.0.0.2 ping statistics --- 00:29:18.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.167 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:29:18.167 00:29:18.167 --- 10.0.0.1 ping statistics --- 00:29:18.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.167 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:18.167 16:07:16 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:18.425 16:07:16 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:18.425 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:18.425 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:18.425 16:07:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:18.425 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:18.425 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:29:18.426 16:07:16 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:d8:00.0 00:29:18.426 16:07:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:29:18.426 16:07:16 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:29:18.426 16:07:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:18.426 16:07:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:18.426 16:07:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:18.426 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.687 
16:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:29:23.687 16:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:23.687 16:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:23.687 16:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:23.687 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.864 16:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:27.864 16:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:27.864 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.864 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.864 16:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:27.864 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:27.864 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.864 16:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3933340 00:29:27.864 16:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:27.864 16:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3933340 00:29:27.864 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3933340 ']' 00:29:27.865 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.865 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:27.865 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.865 16:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:27.865 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:27.865 16:07:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.865 [2024-05-15 16:07:26.311889] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:29:27.865 [2024-05-15 16:07:26.311940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.865 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.865 [2024-05-15 16:07:26.384740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.122 [2024-05-15 16:07:26.459575] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.122 [2024-05-15 16:07:26.459610] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:28.122 [2024-05-15 16:07:26.459619] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.122 [2024-05-15 16:07:26.459627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.122 [2024-05-15 16:07:26.459634] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.122 [2024-05-15 16:07:26.459733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.122 [2024-05-15 16:07:26.459829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.122 [2024-05-15 16:07:26.459914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.122 [2024-05-15 16:07:26.459916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:29:28.686 16:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:28.686 INFO: Log level set to 20 00:29:28.686 INFO: Requests: 00:29:28.686 { 00:29:28.686 "jsonrpc": "2.0", 00:29:28.686 "method": "nvmf_set_config", 00:29:28.686 "id": 1, 00:29:28.686 "params": { 00:29:28.686 "admin_cmd_passthru": { 00:29:28.686 "identify_ctrlr": true 00:29:28.686 } 00:29:28.686 } 00:29:28.686 } 00:29:28.686 00:29:28.686 INFO: response: 00:29:28.686 { 00:29:28.686 "jsonrpc": "2.0", 00:29:28.686 "id": 1, 00:29:28.686 "result": true 00:29:28.686 } 00:29:28.686 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.686 16:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:28.686 INFO: Setting log level to 20 00:29:28.686 INFO: Setting log level to 20 00:29:28.686 INFO: Log level set to 20 00:29:28.686 INFO: Log level set to 20 00:29:28.686 INFO: Requests: 00:29:28.686 { 00:29:28.686 "jsonrpc": "2.0", 00:29:28.686 "method": "framework_start_init", 00:29:28.686 "id": 1 00:29:28.686 } 00:29:28.686 00:29:28.686 INFO: Requests: 00:29:28.686 { 00:29:28.686 "jsonrpc": "2.0", 00:29:28.686 "method": "framework_start_init", 00:29:28.686 "id": 1 00:29:28.686 } 00:29:28.686 00:29:28.686 [2024-05-15 16:07:27.199707] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:28.686 INFO: response: 00:29:28.686 { 00:29:28.686 "jsonrpc": "2.0", 00:29:28.686 "id": 1, 00:29:28.686 "result": true 00:29:28.686 } 00:29:28.686 00:29:28.686 INFO: response: 00:29:28.686 { 00:29:28.686 "jsonrpc": "2.0", 00:29:28.686 "id": 1, 00:29:28.686 "result": true 00:29:28.686 } 00:29:28.686 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.686 16:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.686 16:07:27 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:28.686 INFO: Setting log level to 40 00:29:28.686 INFO: Setting log level to 40 00:29:28.686 INFO: Setting log level to 40 00:29:28.686 [2024-05-15 16:07:27.213123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.686 16:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.686 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:28.943 16:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:29:28.943 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.943 16:07:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:32.220 Nvme0n1 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:32.220 [2024-05-15 16:07:30.139889] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:32.220 [2024-05-15 16:07:30.140179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:32.220 [ 00:29:32.220 { 00:29:32.220 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:32.220 "subtype": "Discovery", 00:29:32.220 "listen_addresses": [], 00:29:32.220 "allow_any_host": true, 00:29:32.220 "hosts": [] 00:29:32.220 }, 00:29:32.220 { 00:29:32.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.220 "subtype": "NVMe", 00:29:32.220 "listen_addresses": [ 00:29:32.220 { 00:29:32.220 "trtype": "TCP", 
00:29:32.220 "adrfam": "IPv4", 00:29:32.220 "traddr": "10.0.0.2", 00:29:32.220 "trsvcid": "4420" 00:29:32.220 } 00:29:32.220 ], 00:29:32.220 "allow_any_host": true, 00:29:32.220 "hosts": [], 00:29:32.220 "serial_number": "SPDK00000000000001", 00:29:32.220 "model_number": "SPDK bdev Controller", 00:29:32.220 "max_namespaces": 1, 00:29:32.220 "min_cntlid": 1, 00:29:32.220 "max_cntlid": 65519, 00:29:32.220 "namespaces": [ 00:29:32.220 { 00:29:32.220 "nsid": 1, 00:29:32.220 "bdev_name": "Nvme0n1", 00:29:32.220 "name": "Nvme0n1", 00:29:32.220 "nguid": "53D85B0A0F5249DDB6F9F4AD7B7686D9", 00:29:32.220 "uuid": "53d85b0a-0f52-49dd-b6f9-f4ad7b7686d9" 00:29:32.220 } 00:29:32.220 ] 00:29:32.220 } 00:29:32.220 ] 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:32.220 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:32.220 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:32.220 16:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:32.220 rmmod nvme_tcp 00:29:32.220 rmmod nvme_fabrics 00:29:32.220 rmmod 
nvme_keyring 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3933340 ']' 00:29:32.220 16:07:30 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3933340 00:29:32.220 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3933340 ']' 00:29:32.221 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3933340 00:29:32.221 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:29:32.221 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:32.221 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3933340 00:29:32.479 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:32.479 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:32.479 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3933340' 00:29:32.479 killing process with pid 3933340 00:29:32.479 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3933340 00:29:32.479 [2024-05-15 16:07:30.794809] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:32.479 16:07:30 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3933340 00:29:34.381 16:07:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:34.381 16:07:32 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:34.381 16:07:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:34.381 16:07:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:34.381 16:07:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:34.381 16:07:32 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.381 16:07:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:34.381 16:07:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.953 16:07:34 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:36.953 00:29:36.953 real 0m25.067s 00:29:36.953 user 0m33.673s 00:29:36.953 sys 0m6.554s 00:29:36.953 16:07:34 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:36.953 16:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:36.953 ************************************ 00:29:36.953 END TEST nvmf_identify_passthru 00:29:36.953 ************************************ 00:29:36.953 16:07:34 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:36.953 16:07:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:36.953 16:07:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:36.953 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:29:36.953 ************************************ 00:29:36.953 START TEST nvmf_dif 
00:29:36.953 ************************************ 00:29:36.953 16:07:35 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:36.953 * Looking for test storage... 00:29:36.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:36.953 16:07:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.953 16:07:35 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.953 16:07:35 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.953 16:07:35 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.953 16:07:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.953 16:07:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.953 16:07:35 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.953 16:07:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:36.953 16:07:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:36.953 16:07:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:36.953 16:07:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:36.953 16:07:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:36.953 16:07:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:36.953 16:07:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.953 16:07:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:36.953 16:07:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:36.953 16:07:35 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:36.953 16:07:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
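[annotation, not part of the log] The NIC discovery traced here reduces to matching PCI vendor/device IDs under /sys and collecting the net devices attached to each match. A minimal sketch, assuming the E810 IDs (0x8086:0x159b) and the sysfs paths shown in the trace:
  # sketch only; mirrors the gather_supported_nvmf_pci_devs flow for the E810 case
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done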
00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:43.512 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:43.512 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:43.512 16:07:41 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:43.512 Found net devices under 0000:af:00.0: cvl_0_0 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:43.512 Found net devices under 0000:af:00.1: cvl_0_1 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:43.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:29:43.512 00:29:43.512 --- 10.0.0.2 ping statistics --- 00:29:43.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.512 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:29:43.512 16:07:41 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:29:43.512 00:29:43.512 --- 10.0.0.1 ping statistics --- 00:29:43.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.512 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:29:43.512 16:07:42 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.512 16:07:42 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:43.512 16:07:42 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:43.512 16:07:42 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:46.040 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:46.040 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:46.298 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:46.298 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:46.298 16:07:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:46.298 16:07:44 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:46.298 16:07:44 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:46.298 16:07:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3939386 00:29:46.298 16:07:44 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3939386 00:29:46.298 16:07:44 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3939386 ']' 00:29:46.298 16:07:44 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.298 16:07:44 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:46.298 16:07:44 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.299 16:07:44 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:46.299 16:07:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:46.299 [2024-05-15 16:07:44.813463] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:29:46.299 [2024-05-15 16:07:44.813513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.299 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.556 [2024-05-15 16:07:44.888488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.556 [2024-05-15 16:07:44.961658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.556 [2024-05-15 16:07:44.961692] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.556 [2024-05-15 16:07:44.961702] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.556 [2024-05-15 16:07:44.961711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.556 [2024-05-15 16:07:44.961718] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
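[annotation, not part of the log] The target-side setup traced above (here and in the identify_passthru run) follows one pattern: move one port of the NIC pair into a private network namespace, address both ends of the link, open the NVMe/TCP port, and start nvmf_tgt inside that namespace. A condensed sketch using the interface names, addresses, and binary path from the trace:
  # sketch only; condensed from the nvmf_tcp_init / nvmfappstart steps in the trace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  # then wait for the RPC socket /var/tmp/spdk.sock before issuing rpc_cmd calls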
00:29:46.556 [2024-05-15 16:07:44.961746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:29:47.122 16:07:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:47.122 16:07:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.122 16:07:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:47.122 16:07:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:47.122 [2024-05-15 16:07:45.660115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.122 16:07:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:47.122 16:07:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:47.380 ************************************ 00:29:47.380 START TEST fio_dif_1_default 00:29:47.380 ************************************ 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:47.380 bdev_null0 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:47.380 [2024-05-15 16:07:45.748303] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:47.380 [2024-05-15 16:07:45.748506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:47.380 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:47.381 { 00:29:47.381 "params": { 00:29:47.381 "name": "Nvme$subsystem", 00:29:47.381 "trtype": "$TEST_TRANSPORT", 00:29:47.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:47.381 "adrfam": "ipv4", 00:29:47.381 "trsvcid": "$NVMF_PORT", 00:29:47.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:47.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:47.381 "hdgst": ${hdgst:-false}, 00:29:47.381 "ddgst": ${ddgst:-false} 00:29:47.381 }, 00:29:47.381 "method": "bdev_nvme_attach_controller" 00:29:47.381 } 00:29:47.381 EOF 00:29:47.381 )") 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:29:47.381 16:07:45 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:47.381 "params": { 00:29:47.381 "name": "Nvme0", 00:29:47.381 "trtype": "tcp", 00:29:47.381 "traddr": "10.0.0.2", 00:29:47.381 "adrfam": "ipv4", 00:29:47.381 "trsvcid": "4420", 00:29:47.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:47.381 "hdgst": false, 00:29:47.381 "ddgst": false 00:29:47.381 }, 00:29:47.381 "method": "bdev_nvme_attach_controller" 00:29:47.381 }' 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:47.381 16:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.638 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:47.638 fio-3.35 00:29:47.638 Starting 1 thread 00:29:47.638 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.823 00:29:59.823 filename0: (groupid=0, jobs=1): err= 0: pid=3939823: Wed May 15 16:07:56 2024 00:29:59.823 read: IOPS=96, BW=385KiB/s (395kB/s)(3856KiB/10005msec) 00:29:59.823 slat (nsec): min=3895, max=53726, avg=6044.75, stdev=1860.81 00:29:59.823 clat (usec): min=40763, max=47326, avg=41494.65, stdev=636.07 00:29:59.823 lat (usec): min=40769, max=47345, avg=41500.70, stdev=636.12 00:29:59.823 clat percentiles (usec): 00:29:59.823 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:59.823 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:29:59.823 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:59.823 | 99.00th=[42730], 99.50th=[43254], 
99.90th=[47449], 99.95th=[47449], 00:29:59.823 | 99.99th=[47449] 00:29:59.824 bw ( KiB/s): min= 352, max= 416, per=99.89%, avg=385.68, stdev=12.95, samples=19 00:29:59.824 iops : min= 88, max= 104, avg=96.42, stdev= 3.24, samples=19 00:29:59.824 lat (msec) : 50=100.00% 00:29:59.824 cpu : usr=85.40%, sys=14.34%, ctx=58, majf=0, minf=240 00:29:59.824 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:59.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.824 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.824 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:59.824 00:29:59.824 Run status group 0 (all jobs): 00:29:59.824 READ: bw=385KiB/s (395kB/s), 385KiB/s-385KiB/s (395kB/s-395kB/s), io=3856KiB (3949kB), run=10005-10005msec 00:29:59.824 16:07:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:59.824 16:07:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 00:29:59.824 real 0m11.308s 00:29:59.824 user 0m17.051s 00:29:59.824 sys 0m1.808s 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 ************************************ 00:29:59.824 END TEST fio_dif_1_default 00:29:59.824 ************************************ 00:29:59.824 16:07:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:59.824 16:07:57 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:59.824 16:07:57 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 ************************************ 00:29:59.824 START TEST fio_dif_1_multi_subsystems 00:29:59.824 ************************************ 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 
-- # local sub 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 bdev_null0 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 [2024-05-15 16:07:57.124488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 bdev_null1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:59.824 { 00:29:59.824 "params": { 00:29:59.824 "name": "Nvme$subsystem", 00:29:59.824 "trtype": "$TEST_TRANSPORT", 00:29:59.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.824 "adrfam": "ipv4", 00:29:59.824 "trsvcid": "$NVMF_PORT", 00:29:59.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.824 "hdgst": ${hdgst:-false}, 00:29:59.824 "ddgst": ${ddgst:-false} 00:29:59.824 }, 00:29:59.824 "method": "bdev_nvme_attach_controller" 00:29:59.824 } 00:29:59.824 EOF 00:29:59.824 )") 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:59.824 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:59.824 { 00:29:59.824 "params": { 00:29:59.824 "name": "Nvme$subsystem", 00:29:59.824 "trtype": "$TEST_TRANSPORT", 00:29:59.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.824 "adrfam": "ipv4", 00:29:59.824 "trsvcid": "$NVMF_PORT", 00:29:59.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.824 "hdgst": ${hdgst:-false}, 00:29:59.825 "ddgst": ${ddgst:-false} 00:29:59.825 }, 00:29:59.825 "method": "bdev_nvme_attach_controller" 00:29:59.825 } 00:29:59.825 EOF 00:29:59.825 )") 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
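The gen_nvmf_target_json trace above assembles the JSON that the fio bdev plugin will load: one bdev_nvme_attach_controller fragment per subsystem, collected in a bash array, comma-joined, and validated with jq. A minimal standalone sketch of that pattern follows (parameter values are the ones used in this run; the outer "subsystems"/"bdev" wrapper is an assumption about what the helper emits, not copied from the trace):

config=()
for subsystem in 0 1; do
    # one attach-controller entry per NVMe-oF subsystem under test
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# comma-join the fragments and pretty-print; jq doubles as a syntax check
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=","; printf '%s' "${config[*]}") ] } ] }
JSON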
00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:59.825 "params": { 00:29:59.825 "name": "Nvme0", 00:29:59.825 "trtype": "tcp", 00:29:59.825 "traddr": "10.0.0.2", 00:29:59.825 "adrfam": "ipv4", 00:29:59.825 "trsvcid": "4420", 00:29:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.825 "hdgst": false, 00:29:59.825 "ddgst": false 00:29:59.825 }, 00:29:59.825 "method": "bdev_nvme_attach_controller" 00:29:59.825 },{ 00:29:59.825 "params": { 00:29:59.825 "name": "Nvme1", 00:29:59.825 "trtype": "tcp", 00:29:59.825 "traddr": "10.0.0.2", 00:29:59.825 "adrfam": "ipv4", 00:29:59.825 "trsvcid": "4420", 00:29:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:59.825 "hdgst": false, 00:29:59.825 "ddgst": false 00:29:59.825 }, 00:29:59.825 "method": "bdev_nvme_attach_controller" 00:29:59.825 }' 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:59.825 16:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.825 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:59.825 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:59.825 fio-3.35 00:29:59.825 Starting 2 threads 00:29:59.825 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.010 00:30:12.010 filename0: (groupid=0, jobs=1): err= 0: pid=3941835: Wed May 15 16:08:08 2024 00:30:12.010 read: IOPS=181, BW=724KiB/s (742kB/s)(7264KiB/10030msec) 00:30:12.010 slat (nsec): min=5790, max=28920, avg=6917.15, stdev=2083.78 00:30:12.010 clat (usec): min=1283, max=44084, avg=22070.93, stdev=20419.03 00:30:12.010 lat (usec): min=1289, max=44109, avg=22077.85, stdev=20418.46 00:30:12.010 clat percentiles (usec): 00:30:12.010 | 1.00th=[ 1500], 5.00th=[ 1516], 10.00th=[ 1532], 20.00th=[ 1549], 00:30:12.010 | 30.00th=[ 1565], 40.00th=[ 1582], 50.00th=[41681], 60.00th=[42206], 00:30:12.010 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:30:12.010 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:30:12.010 | 99.99th=[44303] 
00:30:12.010 bw ( KiB/s): min= 672, max= 768, per=65.59%, avg=724.80, stdev=31.62, samples=20 00:30:12.010 iops : min= 168, max= 192, avg=181.20, stdev= 7.90, samples=20 00:30:12.010 lat (msec) : 2=49.78%, 50=50.22% 00:30:12.010 cpu : usr=93.40%, sys=6.34%, ctx=10, majf=0, minf=157 00:30:12.010 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:12.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.010 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:12.010 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:12.010 filename1: (groupid=0, jobs=1): err= 0: pid=3941836: Wed May 15 16:08:08 2024 00:30:12.010 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10003msec) 00:30:12.010 slat (nsec): min=5799, max=29704, avg=7526.76, stdev=2498.62 00:30:12.010 clat (usec): min=41701, max=44031, avg=42003.25, stdev=190.84 00:30:12.010 lat (usec): min=41707, max=44060, avg=42010.77, stdev=191.26 00:30:12.010 clat percentiles (usec): 00:30:12.010 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:30:12.010 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:12.010 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:12.010 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:30:12.010 | 99.99th=[43779] 00:30:12.010 bw ( KiB/s): min= 352, max= 384, per=34.42%, avg=380.63, stdev=10.09, samples=19 00:30:12.010 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:30:12.010 lat (msec) : 50=100.00% 00:30:12.010 cpu : usr=93.48%, sys=6.26%, ctx=12, majf=0, minf=37 00:30:12.010 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:12.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.010 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:12.010 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:12.010 00:30:12.010 Run status group 0 (all jobs): 00:30:12.010 READ: bw=1104KiB/s (1130kB/s), 381KiB/s-724KiB/s (390kB/s-742kB/s), io=10.8MiB (11.3MB), run=10003-10030msec 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.010 16:08:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.010 00:30:12.010 real 0m11.559s 00:30:12.010 user 0m28.722s 00:30:12.010 sys 0m1.626s 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:12.010 16:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.010 ************************************ 00:30:12.010 END TEST fio_dif_1_multi_subsystems 00:30:12.010 ************************************ 00:30:12.010 16:08:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:12.010 16:08:08 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:12.010 16:08:08 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:12.010 16:08:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.010 ************************************ 00:30:12.010 START TEST fio_dif_rand_params 00:30:12.010 ************************************ 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:12.010 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:12.011 bdev_null0 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:12.011 [2024-05-15 16:08:08.772634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.011 { 00:30:12.011 "params": { 00:30:12.011 "name": "Nvme$subsystem", 00:30:12.011 "trtype": "$TEST_TRANSPORT", 00:30:12.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.011 "adrfam": "ipv4", 00:30:12.011 "trsvcid": "$NVMF_PORT", 00:30:12.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.011 "hdgst": ${hdgst:-false}, 00:30:12.011 "ddgst": ${ddgst:-false} 00:30:12.011 }, 00:30:12.011 "method": "bdev_nvme_attach_controller" 00:30:12.011 } 00:30:12.011 EOF 00:30:12.011 )") 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 
-- # local file 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
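Interleaved with the config generation, the fio_plugin() helper from autotest_common.sh probes whether the bdev plugin was built against a sanitizer, so that the ASAN runtime can be preloaded ahead of the plugin. A rough sketch of that probe, with simplified variable names (the ldd/grep/awk pipeline mirrors the trace; the rest is an illustrative reconstruction):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
preload=""
for sanitizer in libasan libclang_rt.asan; do
    # path of the sanitizer runtime the plugin links against, empty if none
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && preload+="$asan_lib "
done
# the sanitizer runtime (when present) must be loaded before the plugin itself
export LD_PRELOAD="$preload$plugin"

In this run both greps come back empty, which is why the trace shows asan_lib= and an LD_PRELOAD that contains only the plugin path.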
00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:12.011 "params": { 00:30:12.011 "name": "Nvme0", 00:30:12.011 "trtype": "tcp", 00:30:12.011 "traddr": "10.0.0.2", 00:30:12.011 "adrfam": "ipv4", 00:30:12.011 "trsvcid": "4420", 00:30:12.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:12.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:12.011 "hdgst": false, 00:30:12.011 "ddgst": false 00:30:12.011 }, 00:30:12.011 "method": "bdev_nvme_attach_controller" 00:30:12.011 }' 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:12.011 16:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:12.011 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:12.011 ... 
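The run that starts here is stock fio driving SPDK bdevs directly: the external ioengine is injected through LD_PRELOAD, the bdev configuration arrives as JSON on one descriptor and the generated job file on another (the /dev/fd/62 and /dev/fd/61 arguments above). A minimal sketch of the same invocation using ordinary files instead of process substitution (the file names here are placeholders, not from the trace):

PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# bdev.json    - the gen_nvmf_target_json output shown above
# null_dif.fio - the job file written by gen_fio_conf; for this sub-test it asks for
#                randread, bs=128k, iodepth=3, numjobs=3, runtime=5, with filename=
#                entries referring to the attached bdevs
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf bdev.json null_dif.fio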
00:30:12.011 fio-3.35 00:30:12.011 Starting 3 threads 00:30:12.011 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.208 00:30:16.208 filename0: (groupid=0, jobs=1): err= 0: pid=3943844: Wed May 15 16:08:14 2024 00:30:16.208 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(123MiB/5030msec) 00:30:16.208 slat (nsec): min=5956, max=26749, avg=8692.36, stdev=2644.71 00:30:16.208 clat (usec): min=4579, max=95392, avg=15368.28, stdev=16347.27 00:30:16.208 lat (usec): min=4586, max=95403, avg=15376.97, stdev=16347.41 00:30:16.208 clat percentiles (usec): 00:30:16.208 | 1.00th=[ 4752], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6849], 00:30:16.208 | 30.00th=[ 7504], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[ 9896], 00:30:16.208 | 70.00th=[10945], 80.00th=[12780], 90.00th=[50070], 95.00th=[52691], 00:30:16.208 | 99.00th=[55837], 99.50th=[91751], 99.90th=[94897], 99.95th=[94897], 00:30:16.208 | 99.99th=[94897] 00:30:16.208 bw ( KiB/s): min=19200, max=36096, per=28.77%, avg=25344.00, stdev=5222.96, samples=9 00:30:16.208 iops : min= 150, max= 282, avg=198.00, stdev=40.80, samples=9 00:30:16.208 lat (msec) : 10=61.06%, 20=23.96%, 50=4.08%, 100=10.91% 00:30:16.208 cpu : usr=92.07%, sys=7.50%, ctx=6, majf=0, minf=73 00:30:16.208 IO depths : 1=3.3%, 2=96.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.208 issued rwts: total=981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.208 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:16.208 filename0: (groupid=0, jobs=1): err= 0: pid=3943845: Wed May 15 16:08:14 2024 00:30:16.208 read: IOPS=288, BW=36.1MiB/s (37.8MB/s)(181MiB/5003msec) 00:30:16.208 slat (nsec): min=3901, max=16025, avg=8560.96, stdev=2220.29 00:30:16.208 clat (usec): min=3997, max=95998, avg=10381.50, stdev=11404.58 00:30:16.208 lat (usec): min=4003, max=96008, avg=10390.07, stdev=11404.93 00:30:16.208 clat percentiles (usec): 00:30:16.208 | 1.00th=[ 4490], 5.00th=[ 4883], 10.00th=[ 5145], 20.00th=[ 5538], 00:30:16.208 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7701], 00:30:16.208 | 70.00th=[ 8586], 80.00th=[ 9765], 90.00th=[12256], 95.00th=[48497], 00:30:16.208 | 99.00th=[52691], 99.50th=[54789], 99.90th=[95945], 99.95th=[95945], 00:30:16.208 | 99.99th=[95945] 00:30:16.208 bw ( KiB/s): min=23040, max=59648, per=41.87%, avg=36889.60, stdev=10807.69, samples=10 00:30:16.208 iops : min= 180, max= 466, avg=288.20, stdev=84.44, samples=10 00:30:16.208 lat (msec) : 4=0.07%, 10=81.86%, 20=11.15%, 50=3.74%, 100=3.19% 00:30:16.208 cpu : usr=91.84%, sys=7.70%, ctx=6, majf=0, minf=40 00:30:16.208 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.208 issued rwts: total=1444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.208 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:16.208 filename0: (groupid=0, jobs=1): err= 0: pid=3943846: Wed May 15 16:08:14 2024 00:30:16.208 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(130MiB/5020msec) 00:30:16.208 slat (nsec): min=5969, max=30253, avg=8893.86, stdev=2598.14 00:30:16.208 clat (usec): min=4075, max=92015, avg=14508.76, stdev=15864.34 00:30:16.208 lat (usec): min=4082, max=92022, avg=14517.66, stdev=15864.60 00:30:16.208 clat percentiles (usec): 
00:30:16.208 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6325], 00:30:16.208 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8586], 60.00th=[ 9503], 00:30:16.208 | 70.00th=[10421], 80.00th=[12518], 90.00th=[49021], 95.00th=[52167], 00:30:16.208 | 99.00th=[56361], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:30:16.208 | 99.99th=[91751] 00:30:16.208 bw ( KiB/s): min=14592, max=38400, per=30.05%, avg=26470.40, stdev=6267.44, samples=10 00:30:16.208 iops : min= 114, max= 300, avg=206.80, stdev=48.96, samples=10 00:30:16.208 lat (msec) : 10=65.38%, 20=20.83%, 50=5.59%, 100=8.20% 00:30:16.208 cpu : usr=91.91%, sys=7.63%, ctx=6, majf=0, minf=139 00:30:16.208 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.208 issued rwts: total=1037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.208 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:16.208 00:30:16.208 Run status group 0 (all jobs): 00:30:16.208 READ: bw=86.0MiB/s (90.2MB/s), 24.4MiB/s-36.1MiB/s (25.6MB/s-37.8MB/s), io=433MiB (454MB), run=5003-5030msec 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
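(As a quick consistency check on the summary just above: 86.0 MiB/s aggregated over the ~5.03 s runtime works out to ≈ 432 MiB of data, consistent with the reported io=433MiB.) The per-subsystem setup that the trace below performs for bdev_null0/1/2 reduces to four RPCs: create a null bdev with 16-byte metadata and the requested DIF type, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. A standalone sketch (command names and arguments are copied from the trace; rpc_cmd forwards them to SPDK's scripts/rpc.py, whose explicit path here is an assumption):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for sub in 0 1 2; do
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
    $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done

Teardown at the end of each sub-test reverses this with nvmf_delete_subsystem followed by bdev_null_delete, as seen above for the single-subsystem case.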
00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 bdev_null0 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 [2024-05-15 16:08:14.836902] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 bdev_null1 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 bdev_null2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.467 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:30:16.468 { 00:30:16.468 "params": { 00:30:16.468 "name": "Nvme$subsystem", 00:30:16.468 "trtype": "$TEST_TRANSPORT", 00:30:16.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.468 "adrfam": "ipv4", 00:30:16.468 "trsvcid": "$NVMF_PORT", 00:30:16.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.468 "hdgst": ${hdgst:-false}, 00:30:16.468 "ddgst": ${ddgst:-false} 00:30:16.468 }, 00:30:16.468 "method": "bdev_nvme_attach_controller" 00:30:16.468 } 00:30:16.468 EOF 00:30:16.468 )") 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.468 { 00:30:16.468 "params": { 00:30:16.468 "name": "Nvme$subsystem", 00:30:16.468 "trtype": "$TEST_TRANSPORT", 00:30:16.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.468 "adrfam": "ipv4", 00:30:16.468 "trsvcid": "$NVMF_PORT", 00:30:16.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.468 "hdgst": ${hdgst:-false}, 00:30:16.468 "ddgst": ${ddgst:-false} 00:30:16.468 }, 00:30:16.468 "method": "bdev_nvme_attach_controller" 00:30:16.468 } 00:30:16.468 EOF 00:30:16.468 )") 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.468 { 00:30:16.468 "params": { 00:30:16.468 "name": "Nvme$subsystem", 00:30:16.468 "trtype": "$TEST_TRANSPORT", 00:30:16.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.468 "adrfam": "ipv4", 00:30:16.468 "trsvcid": "$NVMF_PORT", 00:30:16.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.468 "hdgst": ${hdgst:-false}, 00:30:16.468 "ddgst": ${ddgst:-false} 00:30:16.468 }, 00:30:16.468 "method": "bdev_nvme_attach_controller" 00:30:16.468 } 00:30:16.468 EOF 00:30:16.468 )") 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:16.468 "params": { 00:30:16.468 "name": "Nvme0", 00:30:16.468 "trtype": "tcp", 00:30:16.468 "traddr": "10.0.0.2", 00:30:16.468 "adrfam": "ipv4", 00:30:16.468 "trsvcid": "4420", 00:30:16.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:16.468 "hdgst": false, 00:30:16.468 "ddgst": false 00:30:16.468 }, 00:30:16.468 "method": "bdev_nvme_attach_controller" 00:30:16.468 },{ 00:30:16.468 "params": { 00:30:16.468 "name": "Nvme1", 00:30:16.468 "trtype": "tcp", 00:30:16.468 "traddr": "10.0.0.2", 00:30:16.468 "adrfam": "ipv4", 00:30:16.468 "trsvcid": "4420", 00:30:16.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.468 "hdgst": false, 00:30:16.468 "ddgst": false 00:30:16.468 }, 00:30:16.468 "method": "bdev_nvme_attach_controller" 00:30:16.468 },{ 00:30:16.468 "params": { 00:30:16.468 "name": "Nvme2", 00:30:16.468 "trtype": "tcp", 00:30:16.468 "traddr": "10.0.0.2", 00:30:16.468 "adrfam": "ipv4", 00:30:16.468 "trsvcid": "4420", 00:30:16.468 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:16.468 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:16.468 "hdgst": false, 00:30:16.468 "ddgst": false 00:30:16.468 }, 00:30:16.468 "method": "bdev_nvme_attach_controller" 00:30:16.468 }' 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1341 -- # asan_lib= 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:16.468 16:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.727 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:16.727 ... 00:30:16.727 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:16.727 ... 00:30:16.727 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:16.727 ... 00:30:16.727 fio-3.35 00:30:16.727 Starting 24 threads 00:30:16.984 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.177 00:30:29.177 filename0: (groupid=0, jobs=1): err= 0: pid=3945056: Wed May 15 16:08:26 2024 00:30:29.177 read: IOPS=648, BW=2595KiB/s (2657kB/s)(25.4MiB/10013msec) 00:30:29.177 slat (nsec): min=6366, max=87302, avg=27032.37, stdev=16375.71 00:30:29.177 clat (usec): min=9575, max=46007, avg=24467.20, stdev=3136.25 00:30:29.177 lat (usec): min=9609, max=46015, avg=24494.23, stdev=3136.78 00:30:29.177 clat percentiles (usec): 00:30:29.177 | 1.00th=[14615], 5.00th=[19006], 10.00th=[22938], 20.00th=[23725], 00:30:29.177 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:30:29.177 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25822], 95.00th=[28443], 00:30:29.177 | 99.00th=[37487], 99.50th=[39584], 99.90th=[45876], 99.95th=[45876], 00:30:29.177 | 99.99th=[45876] 00:30:29.177 bw ( KiB/s): min= 2352, max= 2992, per=4.40%, avg=2592.00, stdev=133.16, samples=20 00:30:29.177 iops : min= 588, max= 748, avg=648.00, stdev=33.29, samples=20 00:30:29.177 lat (msec) : 10=0.15%, 20=5.42%, 50=94.43% 00:30:29.177 cpu : usr=97.37%, sys=1.93%, ctx=319, majf=0, minf=40 00:30:29.177 IO depths : 1=3.2%, 2=6.8%, 4=16.9%, 8=62.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:29.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.177 complete : 0=0.0%, 4=92.6%, 8=2.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.177 issued rwts: total=6496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.177 filename0: (groupid=0, jobs=1): err= 0: pid=3945057: Wed May 15 16:08:26 2024 00:30:29.178 read: IOPS=654, BW=2619KiB/s (2682kB/s)(25.6MiB/10023msec) 00:30:29.178 slat (nsec): min=6457, max=94143, avg=15589.65, stdev=10051.64 00:30:29.178 clat (usec): min=2342, max=42268, avg=24339.14, stdev=4017.86 00:30:29.178 lat (usec): min=2355, max=42291, avg=24354.73, stdev=4019.40 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[ 5800], 5.00th=[17433], 10.00th=[21890], 20.00th=[23725], 00:30:29.178 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:30:29.178 | 70.00th=[25035], 80.00th=[25560], 90.00th=[26870], 95.00th=[30540], 00:30:29.178 | 99.00th=[34866], 99.50th=[38011], 99.90th=[41157], 99.95th=[42206], 00:30:29.178 | 99.99th=[42206] 00:30:29.178 bw ( KiB/s): min= 2464, max= 3158, per=4.45%, avg=2618.95, stdev=140.21, samples=20 00:30:29.178 iops : min= 616, max= 789, avg=654.70, stdev=34.94, samples=20 00:30:29.178 lat (msec) : 4=0.61%, 10=0.90%, 
20=6.57%, 50=91.92% 00:30:29.178 cpu : usr=97.19%, sys=2.37%, ctx=17, majf=0, minf=74 00:30:29.178 IO depths : 1=1.0%, 2=2.2%, 4=9.3%, 8=75.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=6562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.178 filename0: (groupid=0, jobs=1): err= 0: pid=3945058: Wed May 15 16:08:26 2024 00:30:29.178 read: IOPS=596, BW=2387KiB/s (2444kB/s)(23.3MiB/10016msec) 00:30:29.178 slat (nsec): min=6444, max=77423, avg=20164.61, stdev=12285.06 00:30:29.178 clat (usec): min=8989, max=48581, avg=26686.25, stdev=5163.85 00:30:29.178 lat (usec): min=9008, max=48605, avg=26706.41, stdev=5164.27 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[14615], 5.00th=[18482], 10.00th=[22676], 20.00th=[23725], 00:30:29.178 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25560], 00:30:29.178 | 70.00th=[28181], 80.00th=[31327], 90.00th=[34341], 95.00th=[36963], 00:30:29.178 | 99.00th=[40109], 99.50th=[41157], 99.90th=[45351], 99.95th=[48497], 00:30:29.178 | 99.99th=[48497] 00:30:29.178 bw ( KiB/s): min= 2000, max= 2560, per=4.05%, avg=2386.80, stdev=136.69, samples=20 00:30:29.178 iops : min= 500, max= 640, avg=596.70, stdev=34.17, samples=20 00:30:29.178 lat (msec) : 10=0.02%, 20=6.56%, 50=93.42% 00:30:29.178 cpu : usr=97.19%, sys=2.38%, ctx=27, majf=0, minf=36 00:30:29.178 IO depths : 1=1.0%, 2=2.2%, 4=9.9%, 8=73.7%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 complete : 0=0.0%, 4=90.6%, 8=5.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=5976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.178 filename0: (groupid=0, jobs=1): err= 0: pid=3945059: Wed May 15 16:08:26 2024 00:30:29.178 read: IOPS=606, BW=2427KiB/s (2485kB/s)(23.7MiB/10003msec) 00:30:29.178 slat (nsec): min=5977, max=77733, avg=20272.61, stdev=11734.07 00:30:29.178 clat (usec): min=5688, max=49640, avg=26227.64, stdev=4911.24 00:30:29.178 lat (usec): min=5695, max=49655, avg=26247.91, stdev=4910.14 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[13304], 5.00th=[22152], 10.00th=[23200], 20.00th=[23725], 00:30:29.178 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:30:29.178 | 70.00th=[25822], 80.00th=[30540], 90.00th=[33424], 95.00th=[35914], 00:30:29.178 | 99.00th=[40633], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:30:29.178 | 99.99th=[49546] 00:30:29.178 bw ( KiB/s): min= 2096, max= 2688, per=4.11%, avg=2420.84, stdev=162.68, samples=19 00:30:29.178 iops : min= 524, max= 672, avg=605.21, stdev=40.67, samples=19 00:30:29.178 lat (msec) : 10=0.21%, 20=3.94%, 50=95.85% 00:30:29.178 cpu : usr=97.42%, sys=2.17%, ctx=17, majf=0, minf=34 00:30:29.178 IO depths : 1=2.4%, 2=4.8%, 4=14.1%, 8=67.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 complete : 0=0.0%, 4=91.5%, 8=4.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=6069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.178 filename0: (groupid=0, jobs=1): err= 0: pid=3945060: Wed May 15 16:08:26 2024 
00:30:29.178 read: IOPS=643, BW=2573KiB/s (2635kB/s)(25.1MiB/10007msec) 00:30:29.178 slat (nsec): min=6417, max=75132, avg=21962.74, stdev=12713.74 00:30:29.178 clat (usec): min=10735, max=46260, avg=24696.16, stdev=3079.47 00:30:29.178 lat (usec): min=10747, max=46281, avg=24718.12, stdev=3079.40 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[15401], 5.00th=[22152], 10.00th=[23200], 20.00th=[23725], 00:30:29.178 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:30:29.178 | 70.00th=[25035], 80.00th=[25297], 90.00th=[26084], 95.00th=[28181], 00:30:29.178 | 99.00th=[39060], 99.50th=[40109], 99.90th=[46400], 99.95th=[46400], 00:30:29.178 | 99.99th=[46400] 00:30:29.178 bw ( KiB/s): min= 2256, max= 2736, per=4.35%, avg=2563.37, stdev=104.45, samples=19 00:30:29.178 iops : min= 564, max= 684, avg=640.84, stdev=26.11, samples=19 00:30:29.178 lat (msec) : 20=4.05%, 50=95.95% 00:30:29.178 cpu : usr=97.62%, sys=1.96%, ctx=14, majf=0, minf=41 00:30:29.178 IO depths : 1=1.3%, 2=6.3%, 4=21.1%, 8=59.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=6438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.178 filename0: (groupid=0, jobs=1): err= 0: pid=3945061: Wed May 15 16:08:26 2024 00:30:29.178 read: IOPS=604, BW=2419KiB/s (2477kB/s)(23.7MiB/10013msec) 00:30:29.178 slat (usec): min=6, max=805, avg=27.64, stdev=19.51 00:30:29.178 clat (usec): min=9942, max=44928, avg=26288.83, stdev=4652.39 00:30:29.178 lat (usec): min=9959, max=44957, avg=26316.47, stdev=4650.48 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[14484], 5.00th=[19530], 10.00th=[23200], 20.00th=[23987], 00:30:29.178 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:29.178 | 70.00th=[26346], 80.00th=[30540], 90.00th=[33162], 95.00th=[34866], 00:30:29.178 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43779], 99.95th=[44827], 00:30:29.178 | 99.99th=[44827] 00:30:29.178 bw ( KiB/s): min= 1920, max= 2640, per=4.11%, avg=2418.80, stdev=207.73, samples=20 00:30:29.178 iops : min= 480, max= 660, avg=604.70, stdev=51.93, samples=20 00:30:29.178 lat (msec) : 10=0.07%, 20=5.53%, 50=94.40% 00:30:29.178 cpu : usr=92.47%, sys=3.89%, ctx=178, majf=0, minf=24 00:30:29.178 IO depths : 1=1.4%, 2=2.9%, 4=9.3%, 8=73.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 complete : 0=0.0%, 4=90.5%, 8=5.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=6056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.178 filename0: (groupid=0, jobs=1): err= 0: pid=3945062: Wed May 15 16:08:26 2024 00:30:29.178 read: IOPS=645, BW=2582KiB/s (2644kB/s)(25.2MiB/10011msec) 00:30:29.178 slat (nsec): min=6357, max=86608, avg=20916.15, stdev=13725.93 00:30:29.178 clat (usec): min=11009, max=46604, avg=24631.91, stdev=3149.13 00:30:29.178 lat (usec): min=11017, max=46611, avg=24652.83, stdev=3148.58 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[15008], 5.00th=[19530], 10.00th=[22938], 20.00th=[23725], 00:30:29.178 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:30:29.178 | 70.00th=[25035], 80.00th=[25297], 90.00th=[26346], 95.00th=[30016], 00:30:29.178 | 
99.00th=[36439], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:30:29.178 | 99.99th=[46400] 00:30:29.178 bw ( KiB/s): min= 2304, max= 2896, per=4.38%, avg=2578.00, stdev=136.90, samples=20 00:30:29.178 iops : min= 576, max= 724, avg=644.50, stdev=34.22, samples=20 00:30:29.178 lat (msec) : 20=5.11%, 50=94.89% 00:30:29.178 cpu : usr=97.30%, sys=2.26%, ctx=16, majf=0, minf=26 00:30:29.178 IO depths : 1=2.7%, 2=5.7%, 4=13.8%, 8=66.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 complete : 0=0.0%, 4=91.6%, 8=4.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=6461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.178 filename0: (groupid=0, jobs=1): err= 0: pid=3945063: Wed May 15 16:08:26 2024 00:30:29.178 read: IOPS=601, BW=2404KiB/s (2462kB/s)(23.5MiB/10005msec) 00:30:29.178 slat (nsec): min=6365, max=81970, avg=20304.00, stdev=12222.21 00:30:29.178 clat (usec): min=11068, max=56017, avg=26503.78, stdev=5056.84 00:30:29.178 lat (usec): min=11086, max=56043, avg=26524.09, stdev=5056.18 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[14615], 5.00th=[19792], 10.00th=[22938], 20.00th=[23987], 00:30:29.178 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:29.178 | 70.00th=[26084], 80.00th=[30540], 90.00th=[33817], 95.00th=[36439], 00:30:29.178 | 99.00th=[41157], 99.50th=[42730], 99.90th=[50594], 99.95th=[55837], 00:30:29.178 | 99.99th=[55837] 00:30:29.178 bw ( KiB/s): min= 2144, max= 2608, per=4.07%, avg=2396.21, stdev=113.93, samples=19 00:30:29.178 iops : min= 536, max= 652, avg=599.05, stdev=28.48, samples=19 00:30:29.178 lat (msec) : 20=5.34%, 50=94.40%, 100=0.27% 00:30:29.178 cpu : usr=97.36%, sys=2.21%, ctx=16, majf=0, minf=31 00:30:29.178 IO depths : 1=0.6%, 2=1.2%, 4=8.2%, 8=76.1%, 16=13.9%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 complete : 0=0.0%, 4=90.2%, 8=6.2%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=6014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.178 filename1: (groupid=0, jobs=1): err= 0: pid=3945064: Wed May 15 16:08:26 2024 00:30:29.178 read: IOPS=597, BW=2392KiB/s (2449kB/s)(23.4MiB/10003msec) 00:30:29.178 slat (nsec): min=6329, max=81822, avg=20881.80, stdev=12597.45 00:30:29.178 clat (usec): min=4474, max=53133, avg=26649.95, stdev=5308.81 00:30:29.178 lat (usec): min=4481, max=53150, avg=26670.83, stdev=5307.97 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[13829], 5.00th=[19268], 10.00th=[23200], 20.00th=[23987], 00:30:29.178 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:30:29.178 | 70.00th=[26870], 80.00th=[31327], 90.00th=[34341], 95.00th=[35914], 00:30:29.178 | 99.00th=[40109], 99.50th=[43779], 99.90th=[53216], 99.95th=[53216], 00:30:29.178 | 99.99th=[53216] 00:30:29.178 bw ( KiB/s): min= 1920, max= 2504, per=4.03%, avg=2375.58, stdev=127.37, samples=19 00:30:29.178 iops : min= 480, max= 626, avg=593.89, stdev=31.84, samples=19 00:30:29.178 lat (msec) : 10=0.57%, 20=4.53%, 50=94.63%, 100=0.27% 00:30:29.178 cpu : usr=97.27%, sys=2.29%, ctx=17, majf=0, minf=32 00:30:29.178 IO depths : 1=0.6%, 2=1.1%, 4=8.6%, 8=75.8%, 16=14.0%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 
complete : 0=0.0%, 4=90.3%, 8=6.0%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=5981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.179 filename1: (groupid=0, jobs=1): err= 0: pid=3945065: Wed May 15 16:08:26 2024 00:30:29.179 read: IOPS=590, BW=2361KiB/s (2418kB/s)(23.1MiB/10003msec) 00:30:29.179 slat (nsec): min=6402, max=80750, avg=21353.66, stdev=12992.73 00:30:29.179 clat (usec): min=4174, max=46990, avg=26985.46, stdev=5237.16 00:30:29.179 lat (usec): min=4182, max=47015, avg=27006.81, stdev=5235.49 00:30:29.179 clat percentiles (usec): 00:30:29.179 | 1.00th=[14091], 5.00th=[19792], 10.00th=[23200], 20.00th=[23987], 00:30:29.179 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25822], 00:30:29.179 | 70.00th=[29492], 80.00th=[31851], 90.00th=[34341], 95.00th=[36439], 00:30:29.179 | 99.00th=[40633], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:30:29.179 | 99.99th=[46924] 00:30:29.179 bw ( KiB/s): min= 2048, max= 2536, per=3.98%, avg=2343.58, stdev=157.80, samples=19 00:30:29.179 iops : min= 512, max= 634, avg=585.89, stdev=39.45, samples=19 00:30:29.179 lat (msec) : 10=0.29%, 20=4.81%, 50=94.90% 00:30:29.179 cpu : usr=97.33%, sys=2.22%, ctx=20, majf=0, minf=24 00:30:29.179 IO depths : 1=1.1%, 2=2.2%, 4=9.3%, 8=74.3%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:29.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 complete : 0=0.0%, 4=90.4%, 8=5.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 issued rwts: total=5905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.179 filename1: (groupid=0, jobs=1): err= 0: pid=3945066: Wed May 15 16:08:26 2024 00:30:29.179 read: IOPS=612, BW=2448KiB/s (2507kB/s)(24.0MiB/10032msec) 00:30:29.179 slat (nsec): min=6500, max=89787, avg=24492.59, stdev=16182.98 00:30:29.179 clat (usec): min=2991, max=47693, avg=25994.92, stdev=5234.86 00:30:29.179 lat (usec): min=3000, max=47720, avg=26019.41, stdev=5235.47 00:30:29.179 clat percentiles (usec): 00:30:29.179 | 1.00th=[ 5538], 5.00th=[18744], 10.00th=[22938], 20.00th=[23725], 00:30:29.179 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:29.179 | 70.00th=[25822], 80.00th=[29754], 90.00th=[32637], 95.00th=[35390], 00:30:29.179 | 99.00th=[41157], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:30:29.179 | 99.99th=[47449] 00:30:29.179 bw ( KiB/s): min= 2224, max= 2976, per=4.16%, avg=2449.60, stdev=151.76, samples=20 00:30:29.179 iops : min= 556, max= 744, avg=612.40, stdev=37.94, samples=20 00:30:29.179 lat (msec) : 4=0.49%, 10=0.81%, 20=5.10%, 50=93.60% 00:30:29.179 cpu : usr=96.68%, sys=2.46%, ctx=127, majf=0, minf=23 00:30:29.179 IO depths : 1=0.7%, 2=1.5%, 4=8.6%, 8=76.2%, 16=13.0%, 32=0.0%, >=64=0.0% 00:30:29.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 issued rwts: total=6140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.179 filename1: (groupid=0, jobs=1): err= 0: pid=3945067: Wed May 15 16:08:26 2024 00:30:29.179 read: IOPS=586, BW=2346KiB/s (2402kB/s)(22.9MiB/10010msec) 00:30:29.179 slat (nsec): min=6408, max=85708, avg=20501.17, stdev=12738.94 00:30:29.179 clat (usec): min=7553, max=55073, avg=27163.45, stdev=5323.67 00:30:29.179 lat (usec): min=7568, 
max=55094, avg=27183.95, stdev=5323.00 00:30:29.179 clat percentiles (usec): 00:30:29.179 | 1.00th=[14091], 5.00th=[21103], 10.00th=[23200], 20.00th=[23987], 00:30:29.179 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:30:29.179 | 70.00th=[29492], 80.00th=[31851], 90.00th=[34866], 95.00th=[36963], 00:30:29.179 | 99.00th=[42206], 99.50th=[42730], 99.90th=[49021], 99.95th=[54789], 00:30:29.179 | 99.99th=[55313] 00:30:29.179 bw ( KiB/s): min= 1960, max= 2464, per=3.97%, avg=2338.53, stdev=123.85, samples=19 00:30:29.179 iops : min= 490, max= 616, avg=584.63, stdev=30.96, samples=19 00:30:29.179 lat (msec) : 10=0.02%, 20=4.33%, 50=95.57%, 100=0.09% 00:30:29.179 cpu : usr=97.22%, sys=2.34%, ctx=18, majf=0, minf=35 00:30:29.179 IO depths : 1=0.4%, 2=0.8%, 4=8.6%, 8=76.6%, 16=13.7%, 32=0.0%, >=64=0.0% 00:30:29.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 issued rwts: total=5870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.179 filename1: (groupid=0, jobs=1): err= 0: pid=3945068: Wed May 15 16:08:26 2024 00:30:29.179 read: IOPS=604, BW=2419KiB/s (2477kB/s)(23.7MiB/10011msec) 00:30:29.179 slat (nsec): min=6379, max=76638, avg=19466.36, stdev=11804.19 00:30:29.179 clat (usec): min=10147, max=45807, avg=26346.88, stdev=4993.30 00:30:29.179 lat (usec): min=10160, max=45821, avg=26366.34, stdev=4993.31 00:30:29.179 clat percentiles (usec): 00:30:29.179 | 1.00th=[14353], 5.00th=[19006], 10.00th=[22414], 20.00th=[23725], 00:30:29.179 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:30:29.179 | 70.00th=[26346], 80.00th=[30802], 90.00th=[34341], 95.00th=[36439], 00:30:29.179 | 99.00th=[40109], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:30:29.179 | 99.99th=[45876] 00:30:29.179 bw ( KiB/s): min= 2176, max= 2640, per=4.10%, avg=2415.60, stdev=106.08, samples=20 00:30:29.179 iops : min= 544, max= 660, avg=603.90, stdev=26.52, samples=20 00:30:29.179 lat (msec) : 20=5.78%, 50=94.22% 00:30:29.179 cpu : usr=97.04%, sys=2.53%, ctx=16, majf=0, minf=27 00:30:29.179 IO depths : 1=0.7%, 2=1.5%, 4=8.6%, 8=75.5%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:29.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 complete : 0=0.0%, 4=90.3%, 8=6.0%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 issued rwts: total=6055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.179 filename1: (groupid=0, jobs=1): err= 0: pid=3945069: Wed May 15 16:08:26 2024 00:30:29.179 read: IOPS=606, BW=2427KiB/s (2485kB/s)(23.7MiB/10013msec) 00:30:29.179 slat (nsec): min=6406, max=76167, avg=18659.00, stdev=11362.94 00:30:29.179 clat (usec): min=12506, max=49197, avg=26262.06, stdev=4735.16 00:30:29.179 lat (usec): min=12518, max=49230, avg=26280.71, stdev=4735.61 00:30:29.179 clat percentiles (usec): 00:30:29.179 | 1.00th=[15139], 5.00th=[19792], 10.00th=[22938], 20.00th=[23725], 00:30:29.179 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:30:29.179 | 70.00th=[26084], 80.00th=[30016], 90.00th=[33162], 95.00th=[35390], 00:30:29.179 | 99.00th=[40633], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:30:29.179 | 99.99th=[49021] 00:30:29.179 bw ( KiB/s): min= 2056, max= 2600, per=4.12%, avg=2427.60, stdev=133.04, samples=20 00:30:29.179 iops : min= 514, max= 650, 
avg=606.90, stdev=33.26, samples=20 00:30:29.179 lat (msec) : 20=5.17%, 50=94.83% 00:30:29.179 cpu : usr=97.01%, sys=2.55%, ctx=21, majf=0, minf=30 00:30:29.179 IO depths : 1=0.3%, 2=0.6%, 4=6.5%, 8=78.6%, 16=14.1%, 32=0.0%, >=64=0.0% 00:30:29.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 complete : 0=0.0%, 4=89.9%, 8=6.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 issued rwts: total=6075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.179 filename1: (groupid=0, jobs=1): err= 0: pid=3945070: Wed May 15 16:08:26 2024 00:30:29.179 read: IOPS=636, BW=2545KiB/s (2606kB/s)(24.9MiB/10004msec) 00:30:29.179 slat (nsec): min=5788, max=78705, avg=22093.66, stdev=12347.30 00:30:29.179 clat (usec): min=5564, max=45418, avg=24980.35, stdev=3899.21 00:30:29.179 lat (usec): min=5572, max=45432, avg=25002.45, stdev=3899.47 00:30:29.179 clat percentiles (usec): 00:30:29.179 | 1.00th=[12911], 5.00th=[19268], 10.00th=[22938], 20.00th=[23725], 00:30:29.179 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:30:29.179 | 70.00th=[25035], 80.00th=[25560], 90.00th=[28705], 95.00th=[33162], 00:30:29.179 | 99.00th=[39584], 99.50th=[40633], 99.90th=[44303], 99.95th=[45351], 00:30:29.179 | 99.99th=[45351] 00:30:29.179 bw ( KiB/s): min= 2256, max= 2672, per=4.30%, avg=2534.11, stdev=113.59, samples=19 00:30:29.179 iops : min= 564, max= 668, avg=633.53, stdev=28.40, samples=19 00:30:29.179 lat (msec) : 10=0.35%, 20=5.77%, 50=93.89% 00:30:29.179 cpu : usr=97.62%, sys=1.97%, ctx=16, majf=0, minf=28 00:30:29.179 IO depths : 1=1.4%, 2=5.6%, 4=19.2%, 8=62.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:29.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 issued rwts: total=6366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.179 filename1: (groupid=0, jobs=1): err= 0: pid=3945071: Wed May 15 16:08:26 2024 00:30:29.179 read: IOPS=644, BW=2576KiB/s (2638kB/s)(25.2MiB/10010msec) 00:30:29.179 slat (nsec): min=6487, max=68621, avg=16521.17, stdev=9669.53 00:30:29.179 clat (usec): min=8977, max=41365, avg=24706.31, stdev=2349.57 00:30:29.179 lat (usec): min=8985, max=41398, avg=24722.83, stdev=2349.47 00:30:29.179 clat percentiles (usec): 00:30:29.179 | 1.00th=[16581], 5.00th=[22938], 10.00th=[23200], 20.00th=[23725], 00:30:29.179 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:30:29.179 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25822], 95.00th=[26608], 00:30:29.179 | 99.00th=[35390], 99.50th=[36439], 99.90th=[39584], 99.95th=[41157], 00:30:29.179 | 99.99th=[41157] 00:30:29.179 bw ( KiB/s): min= 2360, max= 2688, per=4.37%, avg=2572.40, stdev=96.85, samples=20 00:30:29.179 iops : min= 590, max= 672, avg=643.10, stdev=24.21, samples=20 00:30:29.179 lat (msec) : 10=0.25%, 20=1.54%, 50=98.22% 00:30:29.179 cpu : usr=97.44%, sys=2.13%, ctx=15, majf=0, minf=38 00:30:29.179 IO depths : 1=5.6%, 2=11.2%, 4=22.9%, 8=53.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:30:29.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.179 issued rwts: total=6447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.179 filename2: 
(groupid=0, jobs=1): err= 0: pid=3945072: Wed May 15 16:08:26 2024 00:30:29.179 read: IOPS=591, BW=2364KiB/s (2421kB/s)(23.1MiB/10004msec) 00:30:29.179 slat (nsec): min=5479, max=76663, avg=20144.14, stdev=12060.63 00:30:29.179 clat (usec): min=8105, max=63391, avg=26957.95, stdev=5321.31 00:30:29.179 lat (usec): min=8118, max=63406, avg=26978.09, stdev=5320.41 00:30:29.179 clat percentiles (usec): 00:30:29.179 | 1.00th=[13960], 5.00th=[20579], 10.00th=[23200], 20.00th=[23987], 00:30:29.179 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25822], 00:30:29.179 | 70.00th=[28181], 80.00th=[31851], 90.00th=[34341], 95.00th=[36963], 00:30:29.179 | 99.00th=[41157], 99.50th=[45351], 99.90th=[51119], 99.95th=[51119], 00:30:29.179 | 99.99th=[63177] 00:30:29.179 bw ( KiB/s): min= 2168, max= 2480, per=4.00%, avg=2356.21, stdev=91.77, samples=19 00:30:29.179 iops : min= 542, max= 620, avg=589.05, stdev=22.94, samples=19 00:30:29.179 lat (msec) : 10=0.03%, 20=4.58%, 50=95.11%, 100=0.27% 00:30:29.179 cpu : usr=97.15%, sys=2.41%, ctx=19, majf=0, minf=26 00:30:29.179 IO depths : 1=0.8%, 2=1.6%, 4=8.9%, 8=75.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:30:29.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 complete : 0=0.0%, 4=90.3%, 8=5.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 issued rwts: total=5913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.180 filename2: (groupid=0, jobs=1): err= 0: pid=3945073: Wed May 15 16:08:26 2024 00:30:29.180 read: IOPS=587, BW=2351KiB/s (2408kB/s)(23.0MiB/10016msec) 00:30:29.180 slat (nsec): min=6315, max=76647, avg=19868.31, stdev=12044.79 00:30:29.180 clat (usec): min=9862, max=49760, avg=27098.55, stdev=5300.00 00:30:29.180 lat (usec): min=9873, max=49781, avg=27118.42, stdev=5300.29 00:30:29.180 clat percentiles (usec): 00:30:29.180 | 1.00th=[14615], 5.00th=[19530], 10.00th=[22938], 20.00th=[23987], 00:30:29.180 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[26084], 00:30:29.180 | 70.00th=[29492], 80.00th=[31589], 90.00th=[34866], 95.00th=[36963], 00:30:29.180 | 99.00th=[41157], 99.50th=[46400], 99.90th=[48497], 99.95th=[49021], 00:30:29.180 | 99.99th=[49546] 00:30:29.180 bw ( KiB/s): min= 2224, max= 2512, per=3.99%, avg=2350.40, stdev=79.09, samples=20 00:30:29.180 iops : min= 556, max= 628, avg=587.60, stdev=19.77, samples=20 00:30:29.180 lat (msec) : 10=0.02%, 20=5.37%, 50=94.62% 00:30:29.180 cpu : usr=97.24%, sys=2.33%, ctx=20, majf=0, minf=30 00:30:29.180 IO depths : 1=0.7%, 2=1.6%, 4=9.3%, 8=75.1%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:29.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 complete : 0=0.0%, 4=90.5%, 8=5.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 issued rwts: total=5888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.180 filename2: (groupid=0, jobs=1): err= 0: pid=3945074: Wed May 15 16:08:26 2024 00:30:29.180 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:30:29.180 slat (nsec): min=6367, max=77082, avg=18370.29, stdev=10957.39 00:30:29.180 clat (usec): min=10656, max=52738, avg=25712.95, stdev=3847.05 00:30:29.180 lat (usec): min=10686, max=52753, avg=25731.32, stdev=3847.02 00:30:29.180 clat percentiles (usec): 00:30:29.180 | 1.00th=[17433], 5.00th=[22938], 10.00th=[23462], 20.00th=[23987], 00:30:29.180 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 
00:30:29.180 | 70.00th=[25560], 80.00th=[26084], 90.00th=[30802], 95.00th=[33424], 00:30:29.180 | 99.00th=[41681], 99.50th=[43254], 99.90th=[49546], 99.95th=[52691], 00:30:29.180 | 99.99th=[52691] 00:30:29.180 bw ( KiB/s): min= 2256, max= 2584, per=4.21%, avg=2476.40, stdev=80.10, samples=20 00:30:29.180 iops : min= 564, max= 646, avg=619.10, stdev=20.03, samples=20 00:30:29.180 lat (msec) : 20=2.43%, 50=97.50%, 100=0.06% 00:30:29.180 cpu : usr=97.05%, sys=2.50%, ctx=18, majf=0, minf=43 00:30:29.180 IO depths : 1=0.3%, 2=0.8%, 4=6.6%, 8=78.6%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:29.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 issued rwts: total=6207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.180 filename2: (groupid=0, jobs=1): err= 0: pid=3945075: Wed May 15 16:08:26 2024 00:30:29.180 read: IOPS=612, BW=2450KiB/s (2509kB/s)(24.0MiB/10013msec) 00:30:29.180 slat (nsec): min=6368, max=81889, avg=18151.86, stdev=12344.91 00:30:29.180 clat (usec): min=9001, max=47373, avg=26023.57, stdev=3970.37 00:30:29.180 lat (usec): min=9018, max=47380, avg=26041.72, stdev=3969.44 00:30:29.180 clat percentiles (usec): 00:30:29.180 | 1.00th=[17171], 5.00th=[22676], 10.00th=[23462], 20.00th=[23987], 00:30:29.180 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:30:29.180 | 70.00th=[25822], 80.00th=[27395], 90.00th=[32113], 95.00th=[34866], 00:30:29.180 | 99.00th=[39060], 99.50th=[40633], 99.90th=[42206], 99.95th=[44303], 00:30:29.180 | 99.99th=[47449] 00:30:29.180 bw ( KiB/s): min= 2128, max= 2616, per=4.16%, avg=2447.20, stdev=116.94, samples=20 00:30:29.180 iops : min= 532, max= 654, avg=611.80, stdev=29.24, samples=20 00:30:29.180 lat (msec) : 10=0.05%, 20=3.05%, 50=96.90% 00:30:29.180 cpu : usr=97.29%, sys=2.27%, ctx=15, majf=0, minf=30 00:30:29.180 IO depths : 1=0.3%, 2=0.7%, 4=6.7%, 8=78.6%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:29.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 complete : 0=0.0%, 4=89.7%, 8=6.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 issued rwts: total=6134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.180 filename2: (groupid=0, jobs=1): err= 0: pid=3945076: Wed May 15 16:08:26 2024 00:30:29.180 read: IOPS=631, BW=2528KiB/s (2589kB/s)(24.7MiB/10013msec) 00:30:29.180 slat (nsec): min=6369, max=82245, avg=14905.46, stdev=9906.70 00:30:29.180 clat (usec): min=13370, max=44218, avg=25211.79, stdev=4003.90 00:30:29.180 lat (usec): min=13380, max=44232, avg=25226.70, stdev=4004.93 00:30:29.180 clat percentiles (usec): 00:30:29.180 | 1.00th=[15270], 5.00th=[18482], 10.00th=[22676], 20.00th=[23725], 00:30:29.180 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:30:29.180 | 70.00th=[25297], 80.00th=[26084], 90.00th=[30540], 95.00th=[32900], 00:30:29.180 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42730], 99.95th=[44303], 00:30:29.180 | 99.99th=[44303] 00:30:29.180 bw ( KiB/s): min= 2304, max= 2664, per=4.29%, avg=2527.20, stdev=90.10, samples=20 00:30:29.180 iops : min= 576, max= 666, avg=631.80, stdev=22.52, samples=20 00:30:29.180 lat (msec) : 20=7.52%, 50=92.48% 00:30:29.180 cpu : usr=97.53%, sys=2.05%, ctx=17, majf=0, minf=27 00:30:29.180 IO depths : 1=0.8%, 2=3.6%, 4=14.4%, 8=68.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:29.180 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 complete : 0=0.0%, 4=91.7%, 8=3.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 issued rwts: total=6328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.180 filename2: (groupid=0, jobs=1): err= 0: pid=3945077: Wed May 15 16:08:26 2024 00:30:29.180 read: IOPS=625, BW=2501KiB/s (2561kB/s)(24.4MiB/10007msec) 00:30:29.180 slat (nsec): min=6537, max=81428, avg=21640.08, stdev=11877.07 00:30:29.180 clat (usec): min=11443, max=49537, avg=25423.67, stdev=3629.97 00:30:29.180 lat (usec): min=11466, max=49545, avg=25445.31, stdev=3628.27 00:30:29.180 clat percentiles (usec): 00:30:29.180 | 1.00th=[15664], 5.00th=[22938], 10.00th=[23200], 20.00th=[23725], 00:30:29.180 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:30:29.180 | 70.00th=[25297], 80.00th=[25822], 90.00th=[30802], 95.00th=[33817], 00:30:29.180 | 99.00th=[39584], 99.50th=[40633], 99.90th=[44303], 99.95th=[46924], 00:30:29.180 | 99.99th=[49546] 00:30:29.180 bw ( KiB/s): min= 2176, max= 2688, per=4.22%, avg=2486.11, stdev=157.17, samples=19 00:30:29.180 iops : min= 544, max= 672, avg=621.53, stdev=39.29, samples=19 00:30:29.180 lat (msec) : 20=2.11%, 50=97.89% 00:30:29.180 cpu : usr=97.62%, sys=1.96%, ctx=19, majf=0, minf=33 00:30:29.180 IO depths : 1=2.5%, 2=7.0%, 4=19.9%, 8=60.0%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:29.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 complete : 0=0.0%, 4=93.0%, 8=1.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 issued rwts: total=6256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.180 filename2: (groupid=0, jobs=1): err= 0: pid=3945078: Wed May 15 16:08:26 2024 00:30:29.180 read: IOPS=593, BW=2372KiB/s (2429kB/s)(23.2MiB/10003msec) 00:30:29.180 slat (nsec): min=6421, max=87099, avg=20764.50, stdev=12529.47 00:30:29.180 clat (usec): min=3374, max=53106, avg=26867.99, stdev=5321.32 00:30:29.180 lat (usec): min=3380, max=53128, avg=26888.76, stdev=5319.99 00:30:29.180 clat percentiles (usec): 00:30:29.180 | 1.00th=[13435], 5.00th=[20055], 10.00th=[23200], 20.00th=[23987], 00:30:29.180 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:30:29.180 | 70.00th=[28705], 80.00th=[31589], 90.00th=[34341], 95.00th=[35914], 00:30:29.180 | 99.00th=[40109], 99.50th=[43779], 99.90th=[53216], 99.95th=[53216], 00:30:29.180 | 99.99th=[53216] 00:30:29.180 bw ( KiB/s): min= 2000, max= 2512, per=4.00%, avg=2353.68, stdev=132.25, samples=19 00:30:29.180 iops : min= 500, max= 628, avg=588.42, stdev=33.06, samples=19 00:30:29.180 lat (msec) : 4=0.17%, 10=0.29%, 20=4.37%, 50=94.91%, 100=0.27% 00:30:29.180 cpu : usr=97.40%, sys=2.16%, ctx=17, majf=0, minf=30 00:30:29.180 IO depths : 1=0.8%, 2=1.6%, 4=8.8%, 8=75.3%, 16=13.5%, 32=0.0%, >=64=0.0% 00:30:29.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 complete : 0=0.0%, 4=90.3%, 8=5.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 issued rwts: total=5932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.180 filename2: (groupid=0, jobs=1): err= 0: pid=3945079: Wed May 15 16:08:26 2024 00:30:29.180 read: IOPS=613, BW=2456KiB/s (2515kB/s)(24.1MiB/10036msec) 00:30:29.180 slat (usec): min=3, max=813, avg=24.80, stdev=21.68 00:30:29.180 clat (usec): min=3271, 
max=49611, avg=25905.39, stdev=5808.21 00:30:29.180 lat (usec): min=3278, max=49637, avg=25930.18, stdev=5810.17 00:30:29.180 clat percentiles (usec): 00:30:29.180 | 1.00th=[ 4817], 5.00th=[16581], 10.00th=[20841], 20.00th=[23462], 00:30:29.180 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:30:29.180 | 70.00th=[26084], 80.00th=[30540], 90.00th=[33424], 95.00th=[36439], 00:30:29.180 | 99.00th=[41681], 99.50th=[43779], 99.90th=[49021], 99.95th=[49546], 00:30:29.180 | 99.99th=[49546] 00:30:29.180 bw ( KiB/s): min= 2176, max= 3072, per=4.17%, avg=2458.60, stdev=173.33, samples=20 00:30:29.180 iops : min= 544, max= 768, avg=614.65, stdev=43.33, samples=20 00:30:29.180 lat (msec) : 4=0.67%, 10=0.63%, 20=7.47%, 50=91.24% 00:30:29.180 cpu : usr=90.51%, sys=4.50%, ctx=103, majf=0, minf=32 00:30:29.180 IO depths : 1=0.6%, 2=1.5%, 4=8.3%, 8=76.0%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:29.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 complete : 0=0.0%, 4=90.3%, 8=5.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.180 issued rwts: total=6162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:29.180 00:30:29.180 Run status group 0 (all jobs): 00:30:29.180 READ: bw=57.5MiB/s (60.3MB/s), 2346KiB/s-2619KiB/s (2402kB/s-2682kB/s), io=577MiB (605MB), run=10003-10036msec 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.180 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:29.181 
16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 bdev_null0 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 [2024-05-15 16:08:26.660156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 bdev_null1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.181 { 00:30:29.181 "params": { 00:30:29.181 "name": "Nvme$subsystem", 00:30:29.181 "trtype": "$TEST_TRANSPORT", 00:30:29.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.181 "adrfam": "ipv4", 00:30:29.181 "trsvcid": "$NVMF_PORT", 00:30:29.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.181 "hdgst": ${hdgst:-false}, 00:30:29.181 "ddgst": ${ddgst:-false} 00:30:29.181 }, 00:30:29.181 "method": "bdev_nvme_attach_controller" 00:30:29.181 } 00:30:29.181 EOF 00:30:29.181 )") 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.181 { 00:30:29.181 "params": { 00:30:29.181 "name": "Nvme$subsystem", 00:30:29.181 "trtype": "$TEST_TRANSPORT", 00:30:29.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.181 "adrfam": "ipv4", 00:30:29.181 "trsvcid": "$NVMF_PORT", 00:30:29.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.181 "hdgst": ${hdgst:-false}, 00:30:29.181 "ddgst": ${ddgst:-false} 00:30:29.181 }, 
00:30:29.181 "method": "bdev_nvme_attach_controller" 00:30:29.181 } 00:30:29.181 EOF 00:30:29.181 )") 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:29.181 16:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:29.181 "params": { 00:30:29.181 "name": "Nvme0", 00:30:29.181 "trtype": "tcp", 00:30:29.181 "traddr": "10.0.0.2", 00:30:29.181 "adrfam": "ipv4", 00:30:29.181 "trsvcid": "4420", 00:30:29.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.182 "hdgst": false, 00:30:29.182 "ddgst": false 00:30:29.182 }, 00:30:29.182 "method": "bdev_nvme_attach_controller" 00:30:29.182 },{ 00:30:29.182 "params": { 00:30:29.182 "name": "Nvme1", 00:30:29.182 "trtype": "tcp", 00:30:29.182 "traddr": "10.0.0.2", 00:30:29.182 "adrfam": "ipv4", 00:30:29.182 "trsvcid": "4420", 00:30:29.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:29.182 "hdgst": false, 00:30:29.182 "ddgst": false 00:30:29.182 }, 00:30:29.182 "method": "bdev_nvme_attach_controller" 00:30:29.182 }' 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:29.182 16:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.182 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:29.182 ... 00:30:29.182 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:29.182 ... 
00:30:29.182 fio-3.35 00:30:29.182 Starting 4 threads 00:30:29.182 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.440 00:30:34.440 filename0: (groupid=0, jobs=1): err= 0: pid=3947068: Wed May 15 16:08:32 2024 00:30:34.440 read: IOPS=2733, BW=21.4MiB/s (22.4MB/s)(107MiB/5003msec) 00:30:34.440 slat (nsec): min=3875, max=57043, avg=9785.83, stdev=5343.18 00:30:34.440 clat (usec): min=996, max=5479, avg=2901.81, stdev=417.74 00:30:34.440 lat (usec): min=1002, max=5491, avg=2911.60, stdev=417.92 00:30:34.440 clat percentiles (usec): 00:30:34.440 | 1.00th=[ 1893], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2606], 00:30:34.440 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:30:34.440 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 3621], 00:30:34.440 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4424], 99.95th=[ 4555], 00:30:34.440 | 99.99th=[ 4752] 00:30:34.440 bw ( KiB/s): min=21232, max=22832, per=25.31%, avg=21876.80, stdev=477.33, samples=10 00:30:34.440 iops : min= 2654, max= 2854, avg=2734.60, stdev=59.67, samples=10 00:30:34.440 lat (usec) : 1000=0.01% 00:30:34.440 lat (msec) : 2=1.95%, 4=96.98%, 10=1.06% 00:30:34.440 cpu : usr=93.94%, sys=5.68%, ctx=9, majf=0, minf=61 00:30:34.440 IO depths : 1=0.1%, 2=0.7%, 4=65.3%, 8=33.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.440 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.440 issued rwts: total=13678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.440 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:34.440 filename0: (groupid=0, jobs=1): err= 0: pid=3947069: Wed May 15 16:08:32 2024 00:30:34.440 read: IOPS=2685, BW=21.0MiB/s (22.0MB/s)(105MiB/5001msec) 00:30:34.440 slat (nsec): min=5859, max=38576, avg=8958.54, stdev=3371.33 00:30:34.440 clat (usec): min=1628, max=44809, avg=2956.34, stdev=1099.40 00:30:34.440 lat (usec): min=1634, max=44821, avg=2965.30, stdev=1099.39 00:30:34.440 clat percentiles (usec): 00:30:34.440 | 1.00th=[ 2040], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2638], 00:30:34.440 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2966], 00:30:34.440 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3621], 00:30:34.440 | 99.00th=[ 4047], 99.50th=[ 4228], 99.90th=[ 7701], 99.95th=[44827], 00:30:34.440 | 99.99th=[44827] 00:30:34.440 bw ( KiB/s): min=19568, max=21936, per=24.82%, avg=21447.11, stdev=731.29, samples=9 00:30:34.440 iops : min= 2446, max= 2742, avg=2680.89, stdev=91.41, samples=9 00:30:34.440 lat (msec) : 2=0.66%, 4=98.21%, 10=1.07%, 50=0.06% 00:30:34.440 cpu : usr=93.50%, sys=6.16%, ctx=7, majf=0, minf=86 00:30:34.440 IO depths : 1=0.1%, 2=0.9%, 4=65.7%, 8=33.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.440 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.440 issued rwts: total=13432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.440 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:34.440 filename1: (groupid=0, jobs=1): err= 0: pid=3947070: Wed May 15 16:08:32 2024 00:30:34.440 read: IOPS=2681, BW=21.0MiB/s (22.0MB/s)(105MiB/5001msec) 00:30:34.440 slat (nsec): min=5929, max=48849, avg=9011.70, stdev=3729.46 00:30:34.440 clat (usec): min=1096, max=49288, avg=2960.64, stdev=1203.80 00:30:34.440 lat (usec): min=1102, max=49313, avg=2969.65, stdev=1203.84 00:30:34.440 clat percentiles (usec): 
00:30:34.440 | 1.00th=[ 2040], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2638], 00:30:34.440 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:30:34.440 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3654], 00:30:34.440 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 5014], 99.95th=[49021], 00:30:34.440 | 99.99th=[49021] 00:30:34.440 bw ( KiB/s): min=18917, max=22144, per=24.82%, avg=21450.10, stdev=918.04, samples=10 00:30:34.440 iops : min= 2364, max= 2768, avg=2681.20, stdev=114.95, samples=10 00:30:34.440 lat (msec) : 2=0.78%, 4=97.58%, 10=1.59%, 50=0.06% 00:30:34.441 cpu : usr=93.60%, sys=6.06%, ctx=6, majf=0, minf=96 00:30:34.441 IO depths : 1=0.1%, 2=1.2%, 4=65.4%, 8=33.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.441 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.441 issued rwts: total=13412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.441 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:34.441 filename1: (groupid=0, jobs=1): err= 0: pid=3947071: Wed May 15 16:08:32 2024 00:30:34.441 read: IOPS=2704, BW=21.1MiB/s (22.2MB/s)(106MiB/5001msec) 00:30:34.441 slat (usec): min=5, max=117, avg= 9.20, stdev= 3.96 00:30:34.441 clat (usec): min=1435, max=49984, avg=2935.66, stdev=1215.15 00:30:34.441 lat (usec): min=1442, max=50011, avg=2944.86, stdev=1215.19 00:30:34.441 clat percentiles (usec): 00:30:34.441 | 1.00th=[ 2024], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2606], 00:30:34.441 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:30:34.441 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3621], 00:30:34.441 | 99.00th=[ 4178], 99.50th=[ 4424], 99.90th=[ 5276], 99.95th=[50070], 00:30:34.441 | 99.99th=[50070] 00:30:34.441 bw ( KiB/s): min=19280, max=22528, per=24.99%, avg=21594.67, stdev=944.58, samples=9 00:30:34.441 iops : min= 2410, max= 2816, avg=2699.33, stdev=118.07, samples=9 00:30:34.441 lat (msec) : 2=0.81%, 4=97.66%, 10=1.47%, 50=0.06% 00:30:34.441 cpu : usr=93.78%, sys=5.90%, ctx=7, majf=0, minf=74 00:30:34.441 IO depths : 1=0.1%, 2=1.2%, 4=65.7%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.441 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.441 issued rwts: total=13523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.441 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:34.441 00:30:34.441 Run status group 0 (all jobs): 00:30:34.441 READ: bw=84.4MiB/s (88.5MB/s), 21.0MiB/s-21.4MiB/s (22.0MB/s-22.4MB/s), io=422MiB (443MB), run=5001-5003msec 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.441 16:08:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.441 16:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.699 00:30:34.699 real 0m24.285s 00:30:34.699 user 4m52.559s 00:30:34.699 sys 0m9.299s 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:34.699 16:08:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 ************************************ 00:30:34.699 END TEST fio_dif_rand_params 00:30:34.699 ************************************ 00:30:34.699 16:08:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:34.699 16:08:33 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:34.699 16:08:33 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:34.699 16:08:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 ************************************ 00:30:34.699 START TEST fio_dif_digest 00:30:34.699 ************************************ 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:34.699 16:08:33 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 bdev_null0 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:34.699 [2024-05-15 16:08:33.150890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:34.699 { 00:30:34.699 "params": { 00:30:34.699 "name": "Nvme$subsystem", 00:30:34.699 "trtype": "$TEST_TRANSPORT", 00:30:34.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.699 "adrfam": "ipv4", 00:30:34.699 "trsvcid": "$NVMF_PORT", 00:30:34.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.699 "hdgst": ${hdgst:-false}, 00:30:34.699 "ddgst": ${ddgst:-false} 00:30:34.699 }, 00:30:34.699 "method": "bdev_nvme_attach_controller" 00:30:34.699 } 00:30:34.699 EOF 00:30:34.699 )") 00:30:34.699 
16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
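Note: rpc_cmd in these traces is the harness wrapper around scripts/rpc.py, so the target side of the fio_dif_digest test can be reproduced by hand with roughly the calls below, mirroring the bdev_null_create and nvmf_* lines in the trace. This sketch assumes nvmf_tgt is already running on the default RPC socket with a TCP transport created (that happens earlier in the suite), and $SPDK_DIR again stands for the SPDK checkout.

  # Sketch only: recreate the DIF-type-3 null bdev and the NVMe-oF subsystem
  # used by the digest test, as shown by the rpc_cmd calls in the trace.
  RPC=$SPDK_DIR/scripts/rpc.py

  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420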
00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:34.699 "params": { 00:30:34.699 "name": "Nvme0", 00:30:34.699 "trtype": "tcp", 00:30:34.699 "traddr": "10.0.0.2", 00:30:34.699 "adrfam": "ipv4", 00:30:34.699 "trsvcid": "4420", 00:30:34.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:34.699 "hdgst": true, 00:30:34.699 "ddgst": true 00:30:34.699 }, 00:30:34.699 "method": "bdev_nvme_attach_controller" 00:30:34.699 }' 00:30:34.699 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:34.700 16:08:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:35.267 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:35.268 ... 
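Note: the printf above only shows the per-controller params object; gen_nvmf_target_json wraps it into a full SPDK JSON config before handing it to the fio plugin. Written to a file, the host-side config for this digest run would look roughly like the sketch below. The outer subsystems/config wrapper is the usual SPDK JSON config schema and is an assumption here, since the trace only prints the inner object; the params themselves, including hdgst/ddgst which enable the NVMe/TCP header and data digests under test, are copied from the trace.

  # Sketch only: the JSON config consumed via --spdk_json_conf for the digest run.
  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true,
              "ddgst": true
            }
          }
        ]
      }
    ]
  }
  EOF

fio is then pointed at this file the same way as in the earlier sketch, only with the digest workload parameters from the trace (bs=128k, numjobs=3, iodepth=3); the digests themselves are handled transparently by the NVMe/TCP layer.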
00:30:35.268 fio-3.35 00:30:35.268 Starting 3 threads 00:30:35.268 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.446 00:30:47.446 filename0: (groupid=0, jobs=1): err= 0: pid=3948282: Wed May 15 16:08:44 2024 00:30:47.446 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(300MiB/10046msec) 00:30:47.446 slat (nsec): min=6223, max=31871, avg=10703.24, stdev=2156.35 00:30:47.446 clat (usec): min=4997, max=95793, avg=12542.72, stdev=11286.86 00:30:47.446 lat (usec): min=5005, max=95805, avg=12553.42, stdev=11286.90 00:30:47.446 clat percentiles (usec): 00:30:47.446 | 1.00th=[ 5407], 5.00th=[ 6849], 10.00th=[ 7439], 20.00th=[ 8094], 00:30:47.446 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10421], 00:30:47.446 | 70.00th=[10814], 80.00th=[11600], 90.00th=[13173], 95.00th=[50594], 00:30:47.446 | 99.00th=[54789], 99.50th=[56886], 99.90th=[93848], 99.95th=[94897], 00:30:47.446 | 99.99th=[95945] 00:30:47.446 bw ( KiB/s): min=21760, max=41216, per=32.45%, avg=30656.00, stdev=5342.13, samples=20 00:30:47.446 iops : min= 170, max= 322, avg=239.50, stdev=41.74, samples=20 00:30:47.446 lat (msec) : 10=52.02%, 20=41.59%, 50=1.17%, 100=5.21% 00:30:47.446 cpu : usr=91.67%, sys=7.96%, ctx=13, majf=0, minf=91 00:30:47.446 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.446 issued rwts: total=2397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:47.446 filename0: (groupid=0, jobs=1): err= 0: pid=3948283: Wed May 15 16:08:44 2024 00:30:47.446 read: IOPS=182, BW=22.9MiB/s (24.0MB/s)(230MiB/10048msec) 00:30:47.446 slat (nsec): min=6239, max=25415, avg=11124.33, stdev=1976.93 00:30:47.446 clat (usec): min=6094, max=99623, avg=16363.09, stdev=14666.15 00:30:47.446 lat (usec): min=6105, max=99631, avg=16374.22, stdev=14666.22 00:30:47.446 clat percentiles (msec): 00:30:47.446 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:30:47.446 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:30:47.446 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 53], 95.00th=[ 55], 00:30:47.446 | 99.00th=[ 58], 99.50th=[ 58], 99.90th=[ 100], 99.95th=[ 101], 00:30:47.446 | 99.99th=[ 101] 00:30:47.446 bw ( KiB/s): min=17152, max=30976, per=24.87%, avg=23500.80, stdev=4073.71, samples=20 00:30:47.446 iops : min= 134, max= 242, avg=183.60, stdev=31.83, samples=20 00:30:47.446 lat (msec) : 10=25.14%, 20=63.00%, 50=0.27%, 100=11.59% 00:30:47.446 cpu : usr=92.18%, sys=7.47%, ctx=17, majf=0, minf=97 00:30:47.446 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.446 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:47.446 filename0: (groupid=0, jobs=1): err= 0: pid=3948284: Wed May 15 16:08:44 2024 00:30:47.446 read: IOPS=316, BW=39.6MiB/s (41.5MB/s)(398MiB/10047msec) 00:30:47.446 slat (nsec): min=6205, max=26598, avg=10202.93, stdev=2186.40 00:30:47.446 clat (usec): min=4777, max=58849, avg=9445.91, stdev=5790.43 00:30:47.446 lat (usec): min=4784, max=58856, avg=9456.12, stdev=5790.83 00:30:47.446 clat percentiles (usec): 00:30:47.446 | 1.00th=[ 5211], 
5.00th=[ 5604], 10.00th=[ 6063], 20.00th=[ 6849], 00:30:47.446 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9503], 00:30:47.446 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12256], 00:30:47.446 | 99.00th=[52691], 99.50th=[55313], 99.90th=[56361], 99.95th=[56886], 00:30:47.446 | 99.99th=[58983] 00:30:47.446 bw ( KiB/s): min=26880, max=49408, per=43.08%, avg=40704.00, stdev=5620.35, samples=20 00:30:47.446 iops : min= 210, max= 386, avg=318.00, stdev=43.91, samples=20 00:30:47.446 lat (msec) : 10=67.72%, 20=30.80%, 50=0.09%, 100=1.38% 00:30:47.446 cpu : usr=90.87%, sys=8.67%, ctx=16, majf=0, minf=163 00:30:47.446 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.446 issued rwts: total=3182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:47.446 00:30:47.446 Run status group 0 (all jobs): 00:30:47.446 READ: bw=92.3MiB/s (96.8MB/s), 22.9MiB/s-39.6MiB/s (24.0MB/s-41.5MB/s), io=927MiB (972MB), run=10046-10048msec 00:30:47.446 16:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:47.446 16:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:47.446 16:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:47.446 16:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:47.446 16:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:47.446 16:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.447 00:30:47.447 real 0m11.093s 00:30:47.447 user 0m36.509s 00:30:47.447 sys 0m2.773s 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:47.447 16:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:47.447 ************************************ 00:30:47.447 END TEST fio_dif_digest 00:30:47.447 ************************************ 00:30:47.447 16:08:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:47.447 16:08:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:47.447 rmmod nvme_tcp 00:30:47.447 rmmod nvme_fabrics 00:30:47.447 
rmmod nvme_keyring 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3939386 ']' 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3939386 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3939386 ']' 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3939386 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3939386 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3939386' 00:30:47.447 killing process with pid 3939386 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3939386 00:30:47.447 [2024-05-15 16:08:44.371308] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:47.447 16:08:44 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3939386 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:47.447 16:08:44 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:49.388 Waiting for block devices as requested 00:30:49.388 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:49.388 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:49.388 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:49.388 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:49.388 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:49.388 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:49.649 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:49.649 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:49.649 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:49.649 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:49.907 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:49.907 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:49.907 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:50.166 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:50.166 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:50.166 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:50.423 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:50.423 16:08:48 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:50.423 16:08:48 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:50.423 16:08:48 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.423 16:08:48 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.423 16:08:48 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.423 16:08:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:50.423 16:08:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.951 16:08:50 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:52.951 00:30:52.951 
real 1m15.930s 00:30:52.951 user 7m13.897s 00:30:52.951 sys 0m29.832s 00:30:52.951 16:08:50 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:52.951 16:08:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.951 ************************************ 00:30:52.951 END TEST nvmf_dif 00:30:52.951 ************************************ 00:30:52.951 16:08:50 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:52.951 16:08:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:52.951 16:08:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:52.951 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:30:52.951 ************************************ 00:30:52.951 START TEST nvmf_abort_qd_sizes 00:30:52.951 ************************************ 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:52.951 * Looking for test storage... 00:30:52.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:52.951 16:08:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:59.503 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:59.503 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:59.503 Found net devices under 0000:af:00.0: cvl_0_0 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:59.503 Found net devices under 0000:af:00.1: cvl_0_1 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:59.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:30:59.503 00:30:59.503 --- 10.0.0.2 ping statistics --- 00:30:59.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.503 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:30:59.503 00:30:59.503 --- 10.0.0.1 ping statistics --- 00:30:59.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.503 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:59.503 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:59.504 16:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:02.027 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:02.027 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:03.925 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:03.925 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:03.925 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:03.925 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:03.925 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:03.925 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3956442 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3956442 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3956442 ']' 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:03.926 16:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:03.926 [2024-05-15 16:09:02.237607] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:31:03.926 [2024-05-15 16:09:02.237652] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.926 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.926 [2024-05-15 16:09:02.311353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:03.926 [2024-05-15 16:09:02.387652] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.926 [2024-05-15 16:09:02.387690] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.926 [2024-05-15 16:09:02.387700] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.926 [2024-05-15 16:09:02.387708] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.926 [2024-05-15 16:09:02.387716] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.926 [2024-05-15 16:09:02.387803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.926 [2024-05-15 16:09:02.387820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:03.926 [2024-05-15 16:09:02.387911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:03.926 [2024-05-15 16:09:02.387913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.488 16:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:04.488 16:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:31:04.488 16:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:04.488 16:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.488 16:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:31:04.744 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:04.745 16:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:04.745 ************************************ 00:31:04.745 START TEST spdk_target_abort 00:31:04.745 ************************************ 00:31:04.745 16:09:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:31:04.745 16:09:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:04.745 16:09:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:31:04.745 16:09:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.745 16:09:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.017 spdk_targetn1 00:31:08.017 16:09:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.017 16:09:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.017 16:09:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.017 16:09:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.017 [2024-05-15 16:09:06.000214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.017 [2024-05-15 16:09:06.036246] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:08.017 [2024-05-15 16:09:06.036491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:08.017 16:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:08.017 16:09:06 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:08.017 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.290 Initializing NVMe Controllers 00:31:11.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:11.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:11.290 Initialization complete. Launching workers. 00:31:11.290 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5769, failed: 0 00:31:11.290 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1680, failed to submit 4089 00:31:11.290 success 916, unsuccess 764, failed 0 00:31:11.290 16:09:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:11.290 16:09:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:11.290 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.626 Initializing NVMe Controllers 00:31:14.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:14.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:14.626 Initialization complete. Launching workers. 00:31:14.626 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8756, failed: 0 00:31:14.626 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 7508 00:31:14.626 success 314, unsuccess 934, failed 0 00:31:14.626 16:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:14.626 16:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:14.626 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.907 Initializing NVMe Controllers 00:31:17.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:17.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:17.907 Initialization complete. Launching workers. 
00:31:17.907 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34554, failed: 0 00:31:17.908 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2650, failed to submit 31904 00:31:17.908 success 698, unsuccess 1952, failed 0 00:31:17.908 16:09:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:17.908 16:09:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.908 16:09:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:17.908 16:09:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.908 16:09:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:17.908 16:09:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.908 16:09:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.282 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.282 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3956442 00:31:19.282 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3956442 ']' 00:31:19.282 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3956442 00:31:19.282 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:31:19.282 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:19.282 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3956442 00:31:19.541 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:19.541 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:19.541 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3956442' 00:31:19.541 killing process with pid 3956442 00:31:19.541 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3956442 00:31:19.541 [2024-05-15 16:09:17.889466] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:19.541 16:09:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3956442 00:31:19.541 00:31:19.541 real 0m14.940s 00:31:19.541 user 0m58.996s 00:31:19.541 sys 0m2.855s 00:31:19.541 16:09:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:19.541 16:09:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.541 ************************************ 00:31:19.541 END TEST spdk_target_abort 00:31:19.541 ************************************ 00:31:19.800 16:09:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:19.800 16:09:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:19.800 16:09:18 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:31:19.800 16:09:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.800 ************************************ 00:31:19.800 START TEST kernel_target_abort 00:31:19.800 ************************************ 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:19.800 16:09:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:23.082 Waiting for block devices as requested 00:31:23.082 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:23.082 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:23.082 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:23.082 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:23.082 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:23.082 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:23.082 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:23.082 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:23.082 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:23.340 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:23.340 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:23.340 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:23.599 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:23.599 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:23.599 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:23.857 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:23.857 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:24.116 No valid GPT data, bailing 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:24.116 16:09:22 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:31:24.116 00:31:24.116 Discovery Log Number of Records 2, Generation counter 2 00:31:24.116 =====Discovery Log Entry 0====== 00:31:24.116 trtype: tcp 00:31:24.116 adrfam: ipv4 00:31:24.116 subtype: current discovery subsystem 00:31:24.116 treq: not specified, sq flow control disable supported 00:31:24.116 portid: 1 00:31:24.116 trsvcid: 4420 00:31:24.116 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:24.116 traddr: 10.0.0.1 00:31:24.116 eflags: none 00:31:24.116 sectype: none 00:31:24.116 =====Discovery Log Entry 1====== 00:31:24.116 trtype: tcp 00:31:24.116 adrfam: ipv4 00:31:24.116 subtype: nvme subsystem 00:31:24.116 treq: not specified, sq flow control disable supported 00:31:24.116 portid: 1 00:31:24.116 trsvcid: 4420 00:31:24.116 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:24.116 traddr: 10.0.0.1 00:31:24.116 eflags: none 00:31:24.116 sectype: none 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.116 16:09:22 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:24.116 16:09:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.374 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.653 Initializing NVMe Controllers 00:31:27.653 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:27.653 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:27.653 Initialization complete. Launching workers. 00:31:27.653 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54825, failed: 0 00:31:27.653 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 54825, failed to submit 0 00:31:27.653 success 0, unsuccess 54825, failed 0 00:31:27.653 16:09:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:27.653 16:09:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:27.653 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.931 Initializing NVMe Controllers 00:31:30.931 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:30.931 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:30.931 Initialization complete. Launching workers. 
00:31:30.931 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103353, failed: 0 00:31:30.931 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25958, failed to submit 77395 00:31:30.931 success 0, unsuccess 25958, failed 0 00:31:30.931 16:09:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:30.931 16:09:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:30.931 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.457 Initializing NVMe Controllers 00:31:33.457 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:33.457 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:33.457 Initialization complete. Launching workers. 00:31:33.457 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98811, failed: 0 00:31:33.457 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24730, failed to submit 74081 00:31:33.457 success 0, unsuccess 24730, failed 0 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:33.457 16:09:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:36.742 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:31:36.742 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:36.742 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:38.664 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:38.664 00:31:38.664 real 0m18.630s 00:31:38.664 user 0m6.346s 00:31:38.664 sys 0m6.029s 00:31:38.664 16:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:38.664 16:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:38.664 ************************************ 00:31:38.665 END TEST kernel_target_abort 00:31:38.665 ************************************ 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:38.665 rmmod nvme_tcp 00:31:38.665 rmmod nvme_fabrics 00:31:38.665 rmmod nvme_keyring 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3956442 ']' 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3956442 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3956442 ']' 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3956442 00:31:38.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3956442) - No such process 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3956442 is not found' 00:31:38.665 Process with pid 3956442 is not found 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:38.665 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:41.944 Waiting for block devices as requested 00:31:41.944 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:41.944 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:41.944 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:41.944 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:41.944 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:41.944 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:41.944 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:42.202 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:42.202 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:42.202 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:42.460 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:42.460 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:42.460 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:42.460 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:42.719 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:42.719 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:42.719 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:42.976 16:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:42.976 16:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:42.976 16:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:42.976 16:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:42.976 16:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.976 16:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:42.976 16:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.506 16:09:43 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:45.506 00:31:45.506 real 0m52.453s 00:31:45.506 user 1m9.657s 00:31:45.506 sys 0m18.533s 00:31:45.506 16:09:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:45.506 16:09:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:45.506 ************************************ 00:31:45.506 END TEST nvmf_abort_qd_sizes 00:31:45.506 ************************************ 00:31:45.506 16:09:43 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:45.506 16:09:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:45.506 16:09:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:45.506 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:31:45.506 ************************************ 00:31:45.506 START TEST keyring_file 00:31:45.506 ************************************ 00:31:45.506 16:09:43 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:45.506 * Looking for test storage... 
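The keyring_file suite that starts here drives SPDK's file-based keyring against a local NVMe/TCP listener through bdevperf's RPC socket. For orientation, the fixtures it works with are fixed in file.sh and keyring/common.sh and appear verbatim in the trace below; a condensed restatement in shell form:

  subnqn=nqn.2016-06.io.spdk:cnode0           # subsystem the controllers attach to
  hostnqn=nqn.2016-06.io.spdk:host0           # host NQN presented on connect
  key0=00112233445566778899aabbccddeeff       # hex PSK registered as "key0"
  key1=112233445566778899aabbccddeeff00       # hex PSK registered as "key1"
  bperfsock=/var/tmp/bperf.sock               # RPC socket of the bdevperf instance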
00:31:45.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.506 16:09:43 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.506 16:09:43 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.506 16:09:43 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.506 16:09:43 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.506 16:09:43 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.506 16:09:43 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.506 16:09:43 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:45.506 16:09:43 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nQ9R3DDPjE 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:45.506 16:09:43 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nQ9R3DDPjE 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nQ9R3DDPjE 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nQ9R3DDPjE 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.o1USKL4RuM 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:45.506 16:09:43 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.o1USKL4RuM 00:31:45.506 16:09:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.o1USKL4RuM 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.o1USKL4RuM 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@30 -- # tgtpid=3966338 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:45.506 16:09:43 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3966338 00:31:45.506 16:09:43 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3966338 ']' 00:31:45.506 16:09:43 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.506 16:09:43 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:45.506 16:09:43 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.506 16:09:43 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:45.506 16:09:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:45.506 [2024-05-15 16:09:43.879362] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
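The prep_key calls above turned each hex key into a key file: mktemp a path, render the key into NVMe TLS interchange form through an inline "python -" step, and chmod the file to 0600 (a later check in this suite shows that 0660 is rejected). The python body itself is not captured by the xtrace, so the sketch below is an assumption based on the NVMe TLS PSK interchange framing (hash indicator, then base64 of the key bytes plus a little-endian CRC32), not a copy of the SPDK helper:

  key_hex=00112233445566778899aabbccddeeff          # same hex key as key0 above
  path=$(mktemp)                                    # e.g. /tmp/tmp.nQ9R3DDPjE in this run
  python3 - "$key_hex" > "$path" << 'PYEOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed trailing CRC32, little-endian
# digest 0 in the trace is taken to mean "no HKDF hash", i.e. indicator 00
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode(), end="")
PYEOF
  chmod 0600 "$path"                                # keyring_file_add_key refuses wider permissions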
00:31:45.506 [2024-05-15 16:09:43.879416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3966338 ] 00:31:45.506 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.506 [2024-05-15 16:09:43.948312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.506 [2024-05-15 16:09:44.021957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:46.441 16:09:44 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:46.441 [2024-05-15 16:09:44.678986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.441 null0 00:31:46.441 [2024-05-15 16:09:44.710982] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:46.441 [2024-05-15 16:09:44.711029] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:46.441 [2024-05-15 16:09:44.711356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:46.441 [2024-05-15 16:09:44.719020] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.441 16:09:44 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:46.441 [2024-05-15 16:09:44.735058] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:46.441 request: 00:31:46.441 { 00:31:46.441 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.441 "secure_channel": false, 00:31:46.441 "listen_address": { 00:31:46.441 "trtype": "tcp", 00:31:46.441 "traddr": "127.0.0.1", 00:31:46.441 "trsvcid": "4420" 00:31:46.441 }, 00:31:46.441 "method": "nvmf_subsystem_add_listener", 00:31:46.441 "req_id": 1 00:31:46.441 } 00:31:46.441 Got JSON-RPC error response 00:31:46.441 response: 00:31:46.441 { 00:31:46.441 "code": -32602, 00:31:46.441 
"message": "Invalid parameters" 00:31:46.441 } 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:46.441 16:09:44 keyring_file -- keyring/file.sh@46 -- # bperfpid=3966379 00:31:46.441 16:09:44 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:46.441 16:09:44 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3966379 /var/tmp/bperf.sock 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3966379 ']' 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:46.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:46.441 16:09:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:46.441 [2024-05-15 16:09:44.789046] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:31:46.441 [2024-05-15 16:09:44.789093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3966379 ] 00:31:46.441 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.441 [2024-05-15 16:09:44.858844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.441 [2024-05-15 16:09:44.931951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.374 16:09:45 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:47.374 16:09:45 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:47.374 16:09:45 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE 00:31:47.374 16:09:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE 00:31:47.374 16:09:45 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.o1USKL4RuM 00:31:47.374 16:09:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.o1USKL4RuM 00:31:47.631 16:09:45 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:47.631 16:09:45 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:47.631 16:09:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:47.631 16:09:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:47.631 16:09:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:31:47.631 16:09:46 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.nQ9R3DDPjE == \/\t\m\p\/\t\m\p\.\n\Q\9\R\3\D\D\P\j\E ]] 00:31:47.631 16:09:46 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:47.632 16:09:46 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:47.632 16:09:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:47.632 16:09:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:47.632 16:09:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:47.889 16:09:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.o1USKL4RuM == \/\t\m\p\/\t\m\p\.\o\1\U\S\K\L\4\R\u\M ]] 00:31:47.889 16:09:46 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:47.889 16:09:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:47.889 16:09:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:47.889 16:09:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:47.889 16:09:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:47.889 16:09:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:48.147 16:09:46 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:48.147 16:09:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:48.147 16:09:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:48.147 16:09:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:48.147 16:09:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.147 16:09:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:48.147 16:09:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.147 16:09:46 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:48.147 16:09:46 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:48.147 16:09:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:48.405 [2024-05-15 16:09:46.808775] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:48.405 nvme0n1 00:31:48.405 16:09:46 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:48.405 16:09:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:48.405 16:09:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:48.405 16:09:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.405 16:09:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.405 16:09:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:48.662 16:09:47 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:48.662 16:09:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:48.662 
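At this point both key files are registered with the bdevperf instance and nvme0 is attached with --psk key0, which is why key0's refcount reads 2 (one reference from the keyring entry, one held by the live controller) while key1 stays at 1. The happy path, condensed from the rpc.py invocations in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE       # register the PSK file as "key0"
  $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk key0                                                    # TLS connect using the registered key
  $rpc -s $sock keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'   # 2 while nvme0 exists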
16:09:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:48.663 16:09:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:48.663 16:09:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.663 16:09:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.663 16:09:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:48.920 16:09:47 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:48.920 16:09:47 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:48.920 Running I/O for 1 seconds... 00:31:49.855 00:31:49.855 Latency(us) 00:31:49.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.855 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:49.855 nvme0n1 : 1.01 8893.00 34.74 0.00 0.00 14343.11 6448.74 22334.67 00:31:49.855 =================================================================================================================== 00:31:49.855 Total : 8893.00 34.74 0.00 0.00 14343.11 6448.74 22334.67 00:31:49.855 0 00:31:49.855 16:09:48 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:49.855 16:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:50.112 16:09:48 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:50.112 16:09:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:50.112 16:09:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:50.112 16:09:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:50.112 16:09:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:50.112 16:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.370 16:09:48 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:50.370 16:09:48 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:50.370 16:09:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:50.370 16:09:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:50.370 16:09:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:50.370 16:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.370 16:09:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:50.370 16:09:48 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:50.370 16:09:48 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:50.370 16:09:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:50.370 16:09:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:50.370 16:09:48 keyring_file -- common/autotest_common.sh@636 
-- # local arg=bperf_cmd 00:31:50.370 16:09:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:50.370 16:09:48 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:50.370 16:09:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:50.370 16:09:48 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:50.370 16:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:50.627 [2024-05-15 16:09:49.081049] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:50.627 [2024-05-15 16:09:49.081687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3c0e0 (107): Transport endpoint is not connected 00:31:50.628 [2024-05-15 16:09:49.082681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3c0e0 (9): Bad file descriptor 00:31:50.628 [2024-05-15 16:09:49.083681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:50.628 [2024-05-15 16:09:49.083692] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:50.628 [2024-05-15 16:09:49.083701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
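The errno 107 trace just above is the intended negative case: the listener was set up earlier with key0 as its PSK, so presenting key1 from the host side fails while the connection is being brought up and the controller lands in a failed state before the JSON-RPC error below is returned. The suite wraps the call in its NOT helper; a plain-bash equivalent of that assertion:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  if $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
          -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
          -q nqn.2016-06.io.spdk:host0 --psk key1; then
      echo "attach with the mismatched PSK unexpectedly succeeded" >&2
      exit 1
  fi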
00:31:50.628 request: 00:31:50.628 { 00:31:50.628 "name": "nvme0", 00:31:50.628 "trtype": "tcp", 00:31:50.628 "traddr": "127.0.0.1", 00:31:50.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:50.628 "adrfam": "ipv4", 00:31:50.628 "trsvcid": "4420", 00:31:50.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.628 "psk": "key1", 00:31:50.628 "method": "bdev_nvme_attach_controller", 00:31:50.628 "req_id": 1 00:31:50.628 } 00:31:50.628 Got JSON-RPC error response 00:31:50.628 response: 00:31:50.628 { 00:31:50.628 "code": -32602, 00:31:50.628 "message": "Invalid parameters" 00:31:50.628 } 00:31:50.628 16:09:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:50.628 16:09:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:50.628 16:09:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:50.628 16:09:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:50.628 16:09:49 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:50.628 16:09:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:50.628 16:09:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:50.628 16:09:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:50.628 16:09:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:50.628 16:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.885 16:09:49 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:50.885 16:09:49 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:50.885 16:09:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:50.885 16:09:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:50.885 16:09:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:50.885 16:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.885 16:09:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:51.142 16:09:49 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:51.142 16:09:49 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:51.142 16:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:51.142 16:09:49 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:51.142 16:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:51.399 16:09:49 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:51.400 16:09:49 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:51.400 16:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:51.657 16:09:49 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:51.657 16:09:49 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nQ9R3DDPjE 00:31:51.657 16:09:49 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE 00:31:51.657 16:09:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:51.657 16:09:49 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE 00:31:51.657 16:09:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:51.657 16:09:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:51.657 16:09:49 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:51.657 16:09:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:51.657 16:09:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE 00:31:51.657 16:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE 00:31:51.657 [2024-05-15 16:09:50.176766] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nQ9R3DDPjE': 0100660 00:31:51.657 [2024-05-15 16:09:50.176797] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:51.657 request: 00:31:51.657 { 00:31:51.657 "name": "key0", 00:31:51.657 "path": "/tmp/tmp.nQ9R3DDPjE", 00:31:51.657 "method": "keyring_file_add_key", 00:31:51.657 "req_id": 1 00:31:51.657 } 00:31:51.657 Got JSON-RPC error response 00:31:51.657 response: 00:31:51.657 { 00:31:51.657 "code": -1, 00:31:51.657 "message": "Operation not permitted" 00:31:51.657 } 00:31:51.657 16:09:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:51.657 16:09:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:51.657 16:09:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:51.657 16:09:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:51.657 16:09:50 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nQ9R3DDPjE 00:31:51.657 16:09:50 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE 00:31:51.657 16:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nQ9R3DDPjE 00:31:51.915 16:09:50 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nQ9R3DDPjE 00:31:51.915 16:09:50 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:51.915 16:09:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:51.915 16:09:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:51.915 16:09:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:51.915 16:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:51.915 16:09:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:52.172 16:09:50 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:52.172 16:09:50 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:52.172 16:09:50 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:52.172 16:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:52.172 [2024-05-15 16:09:50.710194] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nQ9R3DDPjE': No such file or directory 00:31:52.172 [2024-05-15 16:09:50.710219] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:52.172 [2024-05-15 16:09:50.710242] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:52.172 [2024-05-15 16:09:50.710255] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:52.172 [2024-05-15 16:09:50.710263] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:52.172 request: 00:31:52.172 { 00:31:52.172 "name": "nvme0", 00:31:52.172 "trtype": "tcp", 00:31:52.172 "traddr": "127.0.0.1", 00:31:52.172 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:52.172 "adrfam": "ipv4", 00:31:52.172 "trsvcid": "4420", 00:31:52.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.172 "psk": "key0", 00:31:52.172 "method": "bdev_nvme_attach_controller", 00:31:52.172 "req_id": 1 00:31:52.172 } 00:31:52.172 Got JSON-RPC error response 00:31:52.172 response: 00:31:52.172 { 00:31:52.172 "code": -19, 00:31:52.172 "message": "No such device" 00:31:52.172 } 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:52.172 16:09:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:52.173 16:09:50 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:52.173 16:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:52.430 16:09:50 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qvKNAm9RNW 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:52.430 16:09:50 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:52.430 16:09:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:52.430 16:09:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:52.430 16:09:50 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:52.430 16:09:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:52.430 16:09:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qvKNAm9RNW 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qvKNAm9RNW 00:31:52.430 16:09:50 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.qvKNAm9RNW 00:31:52.430 16:09:50 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qvKNAm9RNW 00:31:52.430 16:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qvKNAm9RNW 00:31:52.688 16:09:51 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:52.688 16:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:52.945 nvme0n1 00:31:52.945 16:09:51 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:52.945 16:09:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:52.945 16:09:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:52.945 16:09:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:52.945 16:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:52.945 16:09:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:53.203 16:09:51 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:53.203 16:09:51 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:53.203 16:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:53.203 16:09:51 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:53.203 16:09:51 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:53.203 16:09:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:53.203 16:09:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:53.203 16:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.461 16:09:51 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:53.461 16:09:51 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:53.461 16:09:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:53.461 16:09:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:53.461 16:09:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:53.461 16:09:51 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.461 16:09:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:53.719 16:09:52 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:53.719 16:09:52 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:53.719 16:09:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:53.719 16:09:52 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:53.719 16:09:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.719 16:09:52 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:53.989 16:09:52 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:53.989 16:09:52 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qvKNAm9RNW 00:31:53.989 16:09:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qvKNAm9RNW 00:31:54.276 16:09:52 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.o1USKL4RuM 00:31:54.276 16:09:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.o1USKL4RuM 00:31:54.276 16:09:52 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:54.276 16:09:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:54.532 nvme0n1 00:31:54.532 16:09:53 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:54.532 16:09:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:54.790 16:09:53 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:54.790 "subsystems": [ 00:31:54.790 { 00:31:54.790 "subsystem": "keyring", 00:31:54.790 "config": [ 00:31:54.790 { 00:31:54.790 "method": "keyring_file_add_key", 00:31:54.790 "params": { 00:31:54.790 "name": "key0", 00:31:54.790 "path": "/tmp/tmp.qvKNAm9RNW" 00:31:54.790 } 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "method": "keyring_file_add_key", 00:31:54.790 "params": { 00:31:54.790 "name": "key1", 00:31:54.790 "path": "/tmp/tmp.o1USKL4RuM" 00:31:54.790 } 00:31:54.790 } 00:31:54.790 ] 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "subsystem": "iobuf", 00:31:54.790 "config": [ 00:31:54.790 { 00:31:54.790 "method": "iobuf_set_options", 00:31:54.790 "params": { 00:31:54.790 "small_pool_count": 8192, 00:31:54.790 "large_pool_count": 1024, 00:31:54.790 "small_bufsize": 8192, 00:31:54.790 "large_bufsize": 135168 00:31:54.790 } 00:31:54.790 } 00:31:54.790 ] 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "subsystem": "sock", 00:31:54.790 "config": [ 00:31:54.790 { 00:31:54.790 "method": "sock_impl_set_options", 00:31:54.790 "params": { 00:31:54.790 
"impl_name": "posix", 00:31:54.790 "recv_buf_size": 2097152, 00:31:54.790 "send_buf_size": 2097152, 00:31:54.790 "enable_recv_pipe": true, 00:31:54.790 "enable_quickack": false, 00:31:54.790 "enable_placement_id": 0, 00:31:54.790 "enable_zerocopy_send_server": true, 00:31:54.790 "enable_zerocopy_send_client": false, 00:31:54.790 "zerocopy_threshold": 0, 00:31:54.790 "tls_version": 0, 00:31:54.790 "enable_ktls": false 00:31:54.790 } 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "method": "sock_impl_set_options", 00:31:54.790 "params": { 00:31:54.790 "impl_name": "ssl", 00:31:54.790 "recv_buf_size": 4096, 00:31:54.790 "send_buf_size": 4096, 00:31:54.790 "enable_recv_pipe": true, 00:31:54.790 "enable_quickack": false, 00:31:54.790 "enable_placement_id": 0, 00:31:54.790 "enable_zerocopy_send_server": true, 00:31:54.790 "enable_zerocopy_send_client": false, 00:31:54.790 "zerocopy_threshold": 0, 00:31:54.790 "tls_version": 0, 00:31:54.790 "enable_ktls": false 00:31:54.790 } 00:31:54.790 } 00:31:54.790 ] 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "subsystem": "vmd", 00:31:54.790 "config": [] 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "subsystem": "accel", 00:31:54.790 "config": [ 00:31:54.790 { 00:31:54.790 "method": "accel_set_options", 00:31:54.790 "params": { 00:31:54.790 "small_cache_size": 128, 00:31:54.790 "large_cache_size": 16, 00:31:54.790 "task_count": 2048, 00:31:54.790 "sequence_count": 2048, 00:31:54.790 "buf_count": 2048 00:31:54.790 } 00:31:54.790 } 00:31:54.790 ] 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "subsystem": "bdev", 00:31:54.790 "config": [ 00:31:54.790 { 00:31:54.790 "method": "bdev_set_options", 00:31:54.790 "params": { 00:31:54.790 "bdev_io_pool_size": 65535, 00:31:54.790 "bdev_io_cache_size": 256, 00:31:54.790 "bdev_auto_examine": true, 00:31:54.790 "iobuf_small_cache_size": 128, 00:31:54.790 "iobuf_large_cache_size": 16 00:31:54.790 } 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "method": "bdev_raid_set_options", 00:31:54.790 "params": { 00:31:54.790 "process_window_size_kb": 1024 00:31:54.790 } 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "method": "bdev_iscsi_set_options", 00:31:54.790 "params": { 00:31:54.790 "timeout_sec": 30 00:31:54.790 } 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "method": "bdev_nvme_set_options", 00:31:54.790 "params": { 00:31:54.790 "action_on_timeout": "none", 00:31:54.790 "timeout_us": 0, 00:31:54.790 "timeout_admin_us": 0, 00:31:54.790 "keep_alive_timeout_ms": 10000, 00:31:54.790 "arbitration_burst": 0, 00:31:54.790 "low_priority_weight": 0, 00:31:54.790 "medium_priority_weight": 0, 00:31:54.790 "high_priority_weight": 0, 00:31:54.790 "nvme_adminq_poll_period_us": 10000, 00:31:54.790 "nvme_ioq_poll_period_us": 0, 00:31:54.790 "io_queue_requests": 512, 00:31:54.790 "delay_cmd_submit": true, 00:31:54.790 "transport_retry_count": 4, 00:31:54.790 "bdev_retry_count": 3, 00:31:54.790 "transport_ack_timeout": 0, 00:31:54.790 "ctrlr_loss_timeout_sec": 0, 00:31:54.790 "reconnect_delay_sec": 0, 00:31:54.790 "fast_io_fail_timeout_sec": 0, 00:31:54.790 "disable_auto_failback": false, 00:31:54.790 "generate_uuids": false, 00:31:54.790 "transport_tos": 0, 00:31:54.790 "nvme_error_stat": false, 00:31:54.790 "rdma_srq_size": 0, 00:31:54.790 "io_path_stat": false, 00:31:54.790 "allow_accel_sequence": false, 00:31:54.790 "rdma_max_cq_size": 0, 00:31:54.790 "rdma_cm_event_timeout_ms": 0, 00:31:54.790 "dhchap_digests": [ 00:31:54.790 "sha256", 00:31:54.790 "sha384", 00:31:54.790 "sha512" 00:31:54.790 ], 00:31:54.790 "dhchap_dhgroups": [ 00:31:54.790 "null", 
00:31:54.790 "ffdhe2048", 00:31:54.790 "ffdhe3072", 00:31:54.790 "ffdhe4096", 00:31:54.790 "ffdhe6144", 00:31:54.790 "ffdhe8192" 00:31:54.790 ] 00:31:54.790 } 00:31:54.790 }, 00:31:54.790 { 00:31:54.790 "method": "bdev_nvme_attach_controller", 00:31:54.790 "params": { 00:31:54.790 "name": "nvme0", 00:31:54.790 "trtype": "TCP", 00:31:54.790 "adrfam": "IPv4", 00:31:54.790 "traddr": "127.0.0.1", 00:31:54.790 "trsvcid": "4420", 00:31:54.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.790 "prchk_reftag": false, 00:31:54.790 "prchk_guard": false, 00:31:54.790 "ctrlr_loss_timeout_sec": 0, 00:31:54.790 "reconnect_delay_sec": 0, 00:31:54.790 "fast_io_fail_timeout_sec": 0, 00:31:54.790 "psk": "key0", 00:31:54.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:54.790 "hdgst": false, 00:31:54.790 "ddgst": false 00:31:54.790 } 00:31:54.790 }, 00:31:54.791 { 00:31:54.791 "method": "bdev_nvme_set_hotplug", 00:31:54.791 "params": { 00:31:54.791 "period_us": 100000, 00:31:54.791 "enable": false 00:31:54.791 } 00:31:54.791 }, 00:31:54.791 { 00:31:54.791 "method": "bdev_wait_for_examine" 00:31:54.791 } 00:31:54.791 ] 00:31:54.791 }, 00:31:54.791 { 00:31:54.791 "subsystem": "nbd", 00:31:54.791 "config": [] 00:31:54.791 } 00:31:54.791 ] 00:31:54.791 }' 00:31:54.791 16:09:53 keyring_file -- keyring/file.sh@114 -- # killprocess 3966379 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3966379 ']' 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3966379 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3966379 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3966379' 00:31:54.791 killing process with pid 3966379 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@965 -- # kill 3966379 00:31:54.791 Received shutdown signal, test time was about 1.000000 seconds 00:31:54.791 00:31:54.791 Latency(us) 00:31:54.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.791 =================================================================================================================== 00:31:54.791 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:54.791 16:09:53 keyring_file -- common/autotest_common.sh@970 -- # wait 3966379 00:31:55.049 16:09:53 keyring_file -- keyring/file.sh@117 -- # bperfpid=3968081 00:31:55.049 16:09:53 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3968081 /var/tmp/bperf.sock 00:31:55.049 16:09:53 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3968081 ']' 00:31:55.049 16:09:53 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:55.049 16:09:53 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:55.049 16:09:53 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:55.049 16:09:53 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:31:55.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:55.049 16:09:53 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:55.049 16:09:53 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:55.049 "subsystems": [ 00:31:55.049 { 00:31:55.049 "subsystem": "keyring", 00:31:55.049 "config": [ 00:31:55.049 { 00:31:55.049 "method": "keyring_file_add_key", 00:31:55.049 "params": { 00:31:55.049 "name": "key0", 00:31:55.049 "path": "/tmp/tmp.qvKNAm9RNW" 00:31:55.049 } 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "method": "keyring_file_add_key", 00:31:55.049 "params": { 00:31:55.049 "name": "key1", 00:31:55.049 "path": "/tmp/tmp.o1USKL4RuM" 00:31:55.049 } 00:31:55.049 } 00:31:55.049 ] 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "subsystem": "iobuf", 00:31:55.049 "config": [ 00:31:55.049 { 00:31:55.049 "method": "iobuf_set_options", 00:31:55.049 "params": { 00:31:55.049 "small_pool_count": 8192, 00:31:55.049 "large_pool_count": 1024, 00:31:55.049 "small_bufsize": 8192, 00:31:55.049 "large_bufsize": 135168 00:31:55.049 } 00:31:55.049 } 00:31:55.049 ] 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "subsystem": "sock", 00:31:55.049 "config": [ 00:31:55.049 { 00:31:55.049 "method": "sock_impl_set_options", 00:31:55.049 "params": { 00:31:55.049 "impl_name": "posix", 00:31:55.049 "recv_buf_size": 2097152, 00:31:55.049 "send_buf_size": 2097152, 00:31:55.049 "enable_recv_pipe": true, 00:31:55.049 "enable_quickack": false, 00:31:55.049 "enable_placement_id": 0, 00:31:55.049 "enable_zerocopy_send_server": true, 00:31:55.049 "enable_zerocopy_send_client": false, 00:31:55.049 "zerocopy_threshold": 0, 00:31:55.049 "tls_version": 0, 00:31:55.049 "enable_ktls": false 00:31:55.049 } 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "method": "sock_impl_set_options", 00:31:55.049 "params": { 00:31:55.049 "impl_name": "ssl", 00:31:55.049 "recv_buf_size": 4096, 00:31:55.049 "send_buf_size": 4096, 00:31:55.049 "enable_recv_pipe": true, 00:31:55.049 "enable_quickack": false, 00:31:55.049 "enable_placement_id": 0, 00:31:55.049 "enable_zerocopy_send_server": true, 00:31:55.049 "enable_zerocopy_send_client": false, 00:31:55.049 "zerocopy_threshold": 0, 00:31:55.049 "tls_version": 0, 00:31:55.049 "enable_ktls": false 00:31:55.049 } 00:31:55.049 } 00:31:55.049 ] 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "subsystem": "vmd", 00:31:55.049 "config": [] 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "subsystem": "accel", 00:31:55.049 "config": [ 00:31:55.049 { 00:31:55.049 "method": "accel_set_options", 00:31:55.049 "params": { 00:31:55.049 "small_cache_size": 128, 00:31:55.049 "large_cache_size": 16, 00:31:55.049 "task_count": 2048, 00:31:55.049 "sequence_count": 2048, 00:31:55.049 "buf_count": 2048 00:31:55.049 } 00:31:55.049 } 00:31:55.049 ] 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "subsystem": "bdev", 00:31:55.049 "config": [ 00:31:55.049 { 00:31:55.049 "method": "bdev_set_options", 00:31:55.049 "params": { 00:31:55.049 "bdev_io_pool_size": 65535, 00:31:55.049 "bdev_io_cache_size": 256, 00:31:55.049 "bdev_auto_examine": true, 00:31:55.049 "iobuf_small_cache_size": 128, 00:31:55.049 "iobuf_large_cache_size": 16 00:31:55.049 } 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "method": "bdev_raid_set_options", 00:31:55.049 "params": { 00:31:55.049 "process_window_size_kb": 1024 00:31:55.049 } 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "method": "bdev_iscsi_set_options", 00:31:55.049 "params": { 00:31:55.049 "timeout_sec": 30 00:31:55.049 } 
00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "method": "bdev_nvme_set_options", 00:31:55.049 "params": { 00:31:55.049 "action_on_timeout": "none", 00:31:55.049 "timeout_us": 0, 00:31:55.049 "timeout_admin_us": 0, 00:31:55.049 "keep_alive_timeout_ms": 10000, 00:31:55.049 "arbitration_burst": 0, 00:31:55.049 "low_priority_weight": 0, 00:31:55.049 "medium_priority_weight": 0, 00:31:55.049 "high_priority_weight": 0, 00:31:55.049 "nvme_adminq_poll_period_us": 10000, 00:31:55.049 "nvme_ioq_poll_period_us": 0, 00:31:55.049 "io_queue_requests": 512, 00:31:55.049 "delay_cmd_submit": true, 00:31:55.049 "transport_retry_count": 4, 00:31:55.049 "bdev_retry_count": 3, 00:31:55.049 "transport_ack_timeout": 0, 00:31:55.049 "ctrlr_loss_timeout_sec": 0, 00:31:55.049 "reconnect_delay_sec": 0, 00:31:55.049 "fast_io_fail_timeout_sec": 0, 00:31:55.049 "disable_auto_failback": false, 00:31:55.049 "generate_uuids": false, 00:31:55.049 "transport_tos": 0, 00:31:55.049 "nvme_error_stat": false, 00:31:55.049 "rdma_srq_size": 0, 00:31:55.049 "io_path_stat": false, 00:31:55.049 "allow_accel_sequence": false, 00:31:55.049 "rdma_max_cq_size": 0, 00:31:55.049 "rdma_cm_event_timeout_ms": 0, 00:31:55.049 "dhchap_digests": [ 00:31:55.049 "sha256", 00:31:55.049 "sha384", 00:31:55.049 "sha512" 00:31:55.049 ], 00:31:55.049 "dhchap_dhgroups": [ 00:31:55.049 "null", 00:31:55.049 "ffdhe2048", 00:31:55.049 "ffdhe3072", 00:31:55.049 "ffdhe4096", 00:31:55.049 "ffdhe6144", 00:31:55.049 "ffdhe8192" 00:31:55.049 ] 00:31:55.049 } 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "method": "bdev_nvme_attach_controller", 00:31:55.049 "params": { 00:31:55.049 "name": "nvme0", 00:31:55.049 "trtype": "TCP", 00:31:55.049 "adrfam": "IPv4", 00:31:55.049 "traddr": "127.0.0.1", 00:31:55.049 "trsvcid": "4420", 00:31:55.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.049 "prchk_reftag": false, 00:31:55.049 "prchk_guard": false, 00:31:55.049 "ctrlr_loss_timeout_sec": 0, 00:31:55.049 "reconnect_delay_sec": 0, 00:31:55.049 "fast_io_fail_timeout_sec": 0, 00:31:55.049 "psk": "key0", 00:31:55.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:55.049 "hdgst": false, 00:31:55.049 "ddgst": false 00:31:55.049 } 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "method": "bdev_nvme_set_hotplug", 00:31:55.049 "params": { 00:31:55.049 "period_us": 100000, 00:31:55.049 "enable": false 00:31:55.049 } 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "method": "bdev_wait_for_examine" 00:31:55.049 } 00:31:55.049 ] 00:31:55.049 }, 00:31:55.049 { 00:31:55.049 "subsystem": "nbd", 00:31:55.049 "config": [] 00:31:55.049 } 00:31:55.049 ] 00:31:55.049 }' 00:31:55.049 16:09:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:55.049 [2024-05-15 16:09:53.578044] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
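For reference, the bdevperf command that consumes the JSON blob printed above was logged as -c /dev/fd/63; that descriptor is what bash process substitution typically produces, so a standalone equivalent is roughly the sketch below. The paths are the ones used throughout this run, and the inline config is abbreviated to the keyring subsystem only; the actual test passes the full keyring/sock/bdev configuration shown above.

    # Sketch only; reconstructed from the command line and config echoed above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock
    "$SPDK_DIR/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r "$BPERF_SOCK" -z -c <(echo '{
            "subsystems": [
                { "subsystem": "keyring",
                  "config": [
                    { "method": "keyring_file_add_key",
                      "params": { "name": "key0", "path": "/tmp/tmp.qvKNAm9RNW" } },
                    { "method": "keyring_file_add_key",
                      "params": { "name": "key1", "path": "/tmp/tmp.o1USKL4RuM" } }
                  ] }
            ]
        }')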
00:31:55.049 [2024-05-15 16:09:53.578097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3968081 ] 00:31:55.049 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.307 [2024-05-15 16:09:53.646981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.307 [2024-05-15 16:09:53.719673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.307 [2024-05-15 16:09:53.869722] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:55.873 16:09:54 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:55.873 16:09:54 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:55.873 16:09:54 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:55.873 16:09:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:55.873 16:09:54 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:56.131 16:09:54 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:56.131 16:09:54 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:56.131 16:09:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:56.131 16:09:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:56.131 16:09:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.131 16:09:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:56.131 16:09:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.389 16:09:54 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:56.389 16:09:54 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:56.389 16:09:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:56.389 16:09:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:56.389 16:09:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.389 16:09:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.389 16:09:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:56.389 16:09:54 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:56.389 16:09:54 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:56.389 16:09:54 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:56.389 16:09:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:56.647 16:09:55 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:56.647 16:09:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:56.647 16:09:55 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.qvKNAm9RNW /tmp/tmp.o1USKL4RuM 00:31:56.647 16:09:55 keyring_file -- keyring/file.sh@20 -- # killprocess 3968081 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3968081 ']' 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3968081 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3968081 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3968081' 00:31:56.647 killing process with pid 3968081 00:31:56.647 16:09:55 keyring_file -- common/autotest_common.sh@965 -- # kill 3968081 00:31:56.647 Received shutdown signal, test time was about 1.000000 seconds 00:31:56.647 00:31:56.647 Latency(us) 00:31:56.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.648 =================================================================================================================== 00:31:56.648 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:56.648 16:09:55 keyring_file -- common/autotest_common.sh@970 -- # wait 3968081 00:31:56.906 16:09:55 keyring_file -- keyring/file.sh@21 -- # killprocess 3966338 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3966338 ']' 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3966338 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3966338 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3966338' 00:31:56.906 killing process with pid 3966338 00:31:56.906 16:09:55 keyring_file -- common/autotest_common.sh@965 -- # kill 3966338 00:31:56.906 [2024-05-15 16:09:55.408370] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:56.907 [2024-05-15 16:09:55.408409] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:56.907 16:09:55 keyring_file -- common/autotest_common.sh@970 -- # wait 3966338 00:31:57.474 00:31:57.474 real 0m12.168s 00:31:57.474 user 0m27.821s 00:31:57.474 sys 0m3.374s 00:31:57.474 16:09:55 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:57.474 16:09:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:57.474 ************************************ 00:31:57.474 END TEST keyring_file 00:31:57.474 ************************************ 00:31:57.474 16:09:55 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:31:57.474 16:09:55 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:57.474 
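The keyring_file verification that just finished boils down to a few RPCs against the bperf socket plus cleanup of the temporary PSK files; a condensed sketch, using the same rpc.py path, socket, and jq filters that appear in the log (expected values taken from this run), is:

    # Sketch of the checks logged above, not an additional test run.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    "$RPC" -s "$SOCK" keyring_get_keys | jq length                                     # 2 keys registered
    "$RPC" -s "$SOCK" keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'  # 2: key0 held by nvme0
    "$RPC" -s "$SOCK" keyring_get_keys | jq '.[] | select(.name == "key1") | .refcnt'  # 1: key1 registered but unused
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'                     # nvme0
    rm -f /tmp/tmp.qvKNAm9RNW /tmp/tmp.o1USKL4RuM                                      # drop the temporary PSK files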
16:09:55 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:57.474 16:09:55 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:31:57.474 16:09:55 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:57.474 16:09:55 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:57.474 16:09:55 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:57.474 16:09:55 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:31:57.474 16:09:55 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:31:57.474 16:09:55 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:57.474 16:09:55 -- common/autotest_common.sh@10 -- # set +x 00:31:57.474 16:09:55 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:31:57.474 16:09:55 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:31:57.474 16:09:55 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:31:57.474 16:09:55 -- common/autotest_common.sh@10 -- # set +x 00:32:04.035 INFO: APP EXITING 00:32:04.035 INFO: killing all VMs 00:32:04.035 INFO: killing vhost app 00:32:04.035 INFO: EXIT DONE 00:32:06.562 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:06.562 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:32:09.848 Cleaning 00:32:09.848 Removing: /var/run/dpdk/spdk0/config 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:09.848 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:09.848 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:09.848 Removing: /var/run/dpdk/spdk1/config 00:32:09.848 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:09.848 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:09.848 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:09.848 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:09.848 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:09.848 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:09.848 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:09.848 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:09.848 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:09.848 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:09.848 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:09.848 Removing: /var/run/dpdk/spdk2/config 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:09.848 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:09.848 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:09.848 Removing: /var/run/dpdk/spdk3/config 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:09.848 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:09.848 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:09.848 Removing: /var/run/dpdk/spdk4/config 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:09.848 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:09.848 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:09.848 Removing: /dev/shm/bdev_svc_trace.1 00:32:09.848 Removing: /dev/shm/nvmf_trace.0 00:32:09.848 Removing: /dev/shm/spdk_tgt_trace.pid3561644 00:32:09.848 Removing: /var/run/dpdk/spdk0 00:32:09.848 Removing: /var/run/dpdk/spdk1 00:32:09.848 Removing: /var/run/dpdk/spdk2 00:32:09.848 Removing: /var/run/dpdk/spdk3 00:32:09.848 Removing: /var/run/dpdk/spdk4 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3559174 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3560426 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3561644 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3562353 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3563337 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3563547 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3564575 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3564829 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3565001 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3566691 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3568139 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3568456 
00:32:09.848 Removing: /var/run/dpdk/spdk_pid3568779 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3569144 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3569483 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3569730 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3570014 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3570323 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3571440 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3574496 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3574895 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3575195 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3575275 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3575915 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3576048 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3576614 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3576872 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3577176 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3577200 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3577486 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3577733 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3578123 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3578413 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3578735 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3579037 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3579068 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3579381 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3579664 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3579913 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3580176 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3580418 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3580665 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3580899 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3581146 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3581413 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3581703 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3581988 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3582269 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3582558 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3582845 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3583130 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3583411 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3583696 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3583986 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3584268 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3584557 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3584844 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3584913 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3585265 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3589338 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3636087 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3640837 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3651756 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3657362 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3661861 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3662599 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3674899 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3674901 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3675814 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3676750 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3677563 00:32:09.848 Removing: /var/run/dpdk/spdk_pid3678348 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3678358 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3678625 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3678643 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3678714 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3679695 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3680501 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3681518 
00:32:10.108 Removing: /var/run/dpdk/spdk_pid3682089 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3682099 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3682368 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3683547 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3684691 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3694058 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3694477 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3698878 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3705104 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3707888 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3718882 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3728302 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3730024 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3731085 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3749377 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3753364 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3778022 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3782830 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3784428 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3786393 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3786566 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3786836 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3787115 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3787699 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3789638 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3790690 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3791262 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3793526 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3794253 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3795040 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3799367 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3809871 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3813995 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3821078 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3822557 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3824066 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3828648 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3832916 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3840789 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3840903 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3845639 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3845742 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3845986 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3846510 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3846515 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3851239 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3851724 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3856413 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3859258 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3865007 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3871303 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3880117 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3887753 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3887785 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3906687 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3907413 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3908045 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3908767 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3909627 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3910365 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3910978 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3911621 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3916630 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3916923 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3923252 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3923521 00:32:10.108 Removing: /var/run/dpdk/spdk_pid3925853 
00:32:10.108 Removing: /var/run/dpdk/spdk_pid3933994 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3934104 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3939501 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3941549 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3943663 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3944738 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3946826 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3948016 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3957627 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3958167 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3958702 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3961162 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3961696 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3962231 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3966338 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3966379 00:32:10.367 Removing: /var/run/dpdk/spdk_pid3968081 00:32:10.367 Clean 00:32:10.367 16:10:08 -- common/autotest_common.sh@1447 -- # return 0 00:32:10.367 16:10:08 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:32:10.367 16:10:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.367 16:10:08 -- common/autotest_common.sh@10 -- # set +x 00:32:10.367 16:10:08 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:32:10.367 16:10:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.367 16:10:08 -- common/autotest_common.sh@10 -- # set +x 00:32:10.367 16:10:08 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:10.367 16:10:08 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:10.367 16:10:08 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:10.367 16:10:08 -- spdk/autotest.sh@387 -- # hash lcov 00:32:10.367 16:10:08 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:10.367 16:10:08 -- spdk/autotest.sh@389 -- # hostname 00:32:10.367 16:10:08 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:10.625 geninfo: WARNING: invalid characters removed from testname! 
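The coverage aggregation that follows is a chain of lcov merge and filter passes over the capture just produced; condensed to the coverage-relevant switches (the full log invocations also carry several genhtml --rc options), it amounts to roughly:

    # Sketch of the lcov post-processing run below; OUT is the job's output
    # directory (spdk/../output in the log).
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
    LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    $LCOV -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info    # merge baseline + test capture
    $LCOV -r $OUT/cov_total.info '*/dpdk/*'           -o $OUT/cov_total.info    # drop bundled DPDK sources
    $LCOV -r $OUT/cov_total.info '/usr/*'             -o $OUT/cov_total.info    # drop system headers
    $LCOV -r $OUT/cov_total.info '*/examples/vmd/*'   -o $OUT/cov_total.info
    $LCOV -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
    $LCOV -r $OUT/cov_total.info '*/app/spdk_top/*'   -o $OUT/cov_total.info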
00:32:32.578 16:10:29 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:33.145 16:10:31 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:35.050 16:10:33 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:36.423 16:10:34 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:38.320 16:10:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:40.218 16:10:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:41.760 16:10:39 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:41.760 16:10:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.760 16:10:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:41.760 16:10:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.760 16:10:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.760 16:10:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.760 16:10:40 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.760 16:10:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.760 16:10:40 -- paths/export.sh@5 -- $ export PATH 00:32:41.760 16:10:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.760 16:10:40 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:41.760 16:10:40 -- common/autobuild_common.sh@437 -- $ date +%s 00:32:41.760 16:10:40 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715782240.XXXXXX 00:32:41.760 16:10:40 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715782240.9O0kxI 00:32:41.760 16:10:40 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:32:41.760 16:10:40 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:32:41.760 16:10:40 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:41.760 16:10:40 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:32:41.760 16:10:40 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:32:41.760 16:10:40 -- common/autobuild_common.sh@453 -- $ get_config_params 00:32:41.760 16:10:40 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:32:41.760 16:10:40 -- common/autotest_common.sh@10 -- $ set +x 00:32:41.760 16:10:40 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:32:41.760 16:10:40 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:32:41.760 16:10:40 -- pm/common@17 -- $ local monitor 00:32:41.760 16:10:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:41.760 16:10:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:41.760 16:10:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:41.760 16:10:40 -- pm/common@21 -- $ date +%s 00:32:41.760 16:10:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:41.760 16:10:40 -- pm/common@21 -- $ date +%s 00:32:41.760 
16:10:40 -- pm/common@25 -- $ sleep 1 00:32:41.760 16:10:40 -- pm/common@21 -- $ date +%s 00:32:41.760 16:10:40 -- pm/common@21 -- $ date +%s 00:32:41.760 16:10:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715782240 00:32:41.760 16:10:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715782240 00:32:41.760 16:10:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715782240 00:32:41.760 16:10:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715782240 00:32:41.760 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715782240_collect-vmstat.pm.log 00:32:41.760 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715782240_collect-cpu-load.pm.log 00:32:41.760 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715782240_collect-cpu-temp.pm.log 00:32:41.760 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715782240_collect-bmc-pm.bmc.pm.log 00:32:42.693 16:10:41 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:32:42.693 16:10:41 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:32:42.693 16:10:41 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:42.693 16:10:41 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:42.693 16:10:41 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:42.693 16:10:41 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:42.693 16:10:41 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:42.693 16:10:41 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:42.693 16:10:41 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:42.693 16:10:41 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:42.693 16:10:41 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:42.693 16:10:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:42.693 16:10:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:42.693 16:10:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:42.693 16:10:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:32:42.693 16:10:41 -- pm/common@44 -- $ pid=3981472 00:32:42.693 16:10:41 -- pm/common@50 -- $ kill -TERM 3981472 00:32:42.693 16:10:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:42.693 16:10:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:32:42.693 16:10:41 -- pm/common@44 -- $ pid=3981474 00:32:42.693 16:10:41 -- pm/common@50 -- $ kill 
-TERM 3981474 00:32:42.693 16:10:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:42.693 16:10:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:32:42.693 16:10:41 -- pm/common@44 -- $ pid=3981476 00:32:42.693 16:10:41 -- pm/common@50 -- $ kill -TERM 3981476 00:32:42.693 16:10:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:42.693 16:10:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:32:42.693 16:10:41 -- pm/common@44 -- $ pid=3981502 00:32:42.693 16:10:41 -- pm/common@50 -- $ sudo -E kill -TERM 3981502 00:32:42.693 + [[ -n 3449265 ]] 00:32:42.693 + sudo kill 3449265 00:32:42.704 [Pipeline] } 00:32:42.723 [Pipeline] // stage 00:32:42.728 [Pipeline] } 00:32:42.746 [Pipeline] // timeout 00:32:42.751 [Pipeline] } 00:32:42.769 [Pipeline] // catchError 00:32:42.774 [Pipeline] } 00:32:42.794 [Pipeline] // wrap 00:32:42.800 [Pipeline] } 00:32:42.815 [Pipeline] // catchError 00:32:42.824 [Pipeline] stage 00:32:42.826 [Pipeline] { (Epilogue) 00:32:42.841 [Pipeline] catchError 00:32:42.842 [Pipeline] { 00:32:42.857 [Pipeline] echo 00:32:42.858 Cleanup processes 00:32:42.864 [Pipeline] sh 00:32:43.149 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:43.149 3981582 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:32:43.149 3981922 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:43.162 [Pipeline] sh 00:32:43.441 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:43.441 ++ grep -v 'sudo pgrep' 00:32:43.441 ++ awk '{print $1}' 00:32:43.441 + sudo kill -9 3981582 00:32:43.452 [Pipeline] sh 00:32:43.730 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:43.730 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:32:48.996 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:32:52.293 [Pipeline] sh 00:32:52.574 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:52.574 Artifacts sizes are good 00:32:52.587 [Pipeline] archiveArtifacts 00:32:52.593 Archiving artifacts 00:32:52.740 [Pipeline] sh 00:32:53.021 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:32:53.038 [Pipeline] cleanWs 00:32:53.048 [WS-CLEANUP] Deleting project workspace... 00:32:53.048 [WS-CLEANUP] Deferred wipeout is used... 00:32:53.055 [WS-CLEANUP] done 00:32:53.057 [Pipeline] } 00:32:53.078 [Pipeline] // catchError 00:32:53.091 [Pipeline] sh 00:32:53.368 + logger -p user.info -t JENKINS-CI 00:32:53.396 [Pipeline] } 00:32:53.444 [Pipeline] // stage 00:32:53.463 [Pipeline] } 00:32:53.497 [Pipeline] // node 00:32:53.504 [Pipeline] End of Pipeline 00:32:53.542 Finished: SUCCESS